Test Report: Docker_Linux_crio 22061

1c88f6d23ea396bf85affe6630893acb8f160428:2025-12-10:42722

Failed tests (29 of 415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.25
44 TestAddons/parallel/Registry 13.58
45 TestAddons/parallel/RegistryCreds 0.4
46 TestAddons/parallel/Ingress 148.16
47 TestAddons/parallel/InspektorGadget 5.24
48 TestAddons/parallel/MetricsServer 5.33
50 TestAddons/parallel/CSI 42.78
51 TestAddons/parallel/Headlamp 2.5
52 TestAddons/parallel/CloudSpanner 5.27
53 TestAddons/parallel/LocalPath 8.11
54 TestAddons/parallel/NvidiaDevicePlugin 5.27
55 TestAddons/parallel/Yakd 5.25
56 TestAddons/parallel/AmdGpuDevicePlugin 5.26
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 2.37
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 2.3
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 2.28
294 TestJSONOutput/pause/Command 1.68
300 TestJSONOutput/unpause/Command 1.67
364 TestPause/serial/Pause 5.98
401 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.32
404 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.34
419 TestStartStop/group/old-k8s-version/serial/Pause 6.28
422 TestStartStop/group/no-preload/serial/Pause 5.82
427 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.35
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.15
435 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.09
444 TestStartStop/group/newest-cni/serial/Pause 6.37
456 TestStartStop/group/embed-certs/serial/Pause 6.39
460 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.55
TestAddons/serial/Volcano (0.25s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable volcano --alsologtostderr -v=1: exit status 11 (248.816426ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:27:53.731432   18302 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:27:53.731761   18302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:27:53.731772   18302 out.go:374] Setting ErrFile to fd 2...
	I1210 22:27:53.731776   18302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:27:53.732014   18302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:27:53.732254   18302 mustload.go:66] Loading cluster: addons-713277
	I1210 22:27:53.732583   18302 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:27:53.732596   18302 addons.go:622] checking whether the cluster is paused
	I1210 22:27:53.732691   18302 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:27:53.732700   18302 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:27:53.733124   18302 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:27:53.751803   18302 ssh_runner.go:195] Run: systemctl --version
	I1210 22:27:53.751867   18302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:27:53.769476   18302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:27:53.864109   18302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:27:53.864205   18302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:27:53.892536   18302 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:27:53.892566   18302 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:27:53.892569   18302 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:27:53.892572   18302 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:27:53.892575   18302 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:27:53.892578   18302 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:27:53.892581   18302 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:27:53.892583   18302 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:27:53.892586   18302 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:27:53.892594   18302 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:27:53.892597   18302 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:27:53.892599   18302 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:27:53.892602   18302 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:27:53.892605   18302 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:27:53.892607   18302 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:27:53.892614   18302 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:27:53.892617   18302 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:27:53.892621   18302 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:27:53.892623   18302 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:27:53.892626   18302 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:27:53.892628   18302 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:27:53.892631   18302 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:27:53.892634   18302 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:27:53.892636   18302 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:27:53.892639   18302 cri.go:89] found id: ""
	I1210 22:27:53.892698   18302 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:27:53.906864   18302 out.go:203] 
	W1210 22:27:53.908160   18302 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:27:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:27:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:27:53.908179   18302 out.go:285] * 
	* 
	W1210 22:27:53.911170   18302 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:27:53.912621   18302 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
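Note: the Volcano addon itself is skipped on crio (addons_test.go:852); what fails is the trailing `addons disable volcano` cleanup, which exits with status 11. The MK_ADDON_DISABLE_PAUSED error comes from minikube's paused-cluster check: it lists kube-system containers with crictl and then runs `sudo runc list -f json` inside the node, and on this node that command fails because /run/runc does not exist. The Registry and RegistryCreds failures below show the identical signature. A minimal reproduction sketch, reusing the profile name and the commands from the log above (the crictl call is the `sudo -s eval` command from the log, simplified; adjust -p for other runs):

	# container listing used by the paused check (succeeds in the log above)
	out/minikube-linux-amd64 -p addons-713277 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# paused-state probe that fails on this node
	out/minikube-linux-amd64 -p addons-713277 ssh "sudo runc list -f json"
	# observed error: open /run/runc: no such file or directory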

                                                
                                    
TestAddons/parallel/Registry (13.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 2.81329ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.001984643s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003459948s
addons_test.go:394: (dbg) Run:  kubectl --context addons-713277 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-713277 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-713277 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.110142521s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 ip
2025/12/10 22:28:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable registry --alsologtostderr -v=1: exit status 11 (259.922169ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:17.091570   20923 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:17.091756   20923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:17.091766   20923 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:17.091770   20923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:17.092024   20923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:17.092266   20923 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:17.092598   20923 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:17.092614   20923 addons.go:622] checking whether the cluster is paused
	I1210 22:28:17.092712   20923 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:17.092724   20923 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:17.093177   20923 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:17.112436   20923 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:17.112484   20923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:17.131378   20923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:17.227987   20923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:17.228099   20923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:17.259264   20923 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:17.259284   20923 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:17.259289   20923 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:17.259310   20923 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:17.259314   20923 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:17.259317   20923 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:17.259320   20923 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:17.259323   20923 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:17.259327   20923 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:17.259334   20923 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:17.259339   20923 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:17.259344   20923 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:17.259359   20923 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:17.259364   20923 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:17.259394   20923 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:17.259410   20923 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:17.259416   20923 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:17.259420   20923 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:17.259423   20923 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:17.259426   20923 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:17.259431   20923 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:17.259437   20923 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:17.259442   20923 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:17.259449   20923 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:17.259454   20923 cri.go:89] found id: ""
	I1210 22:28:17.259515   20923 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:17.277796   20923 out.go:203] 
	W1210 22:28:17.279321   20923 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:17.279352   20923 out.go:285] * 
	* 
	W1210 22:28:17.284799   20923 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:17.286302   20923 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.58s)
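The Registry checks themselves pass: both registry pods report healthy and the in-cluster wget probe completes in about 3.1s. The failure is, again, the trailing `addons disable registry` call hitting the same runc paused-check error described under the Volcano failure above. For reference, the probe the test runs (copied from the log) can be replayed directly against the profile's context:

	kubectl --context addons-713277 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"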

                                                
                                    
TestAddons/parallel/RegistryCreds (0.4s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.456752ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-713277
addons_test.go:334: (dbg) Run:  kubectl --context addons-713277 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (241.717492ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:11.659688   20292 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:11.659938   20292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:11.659948   20292 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:11.659952   20292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:11.660160   20292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:11.660421   20292 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:11.660760   20292 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:11.660777   20292 addons.go:622] checking whether the cluster is paused
	I1210 22:28:11.660865   20292 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:11.660877   20292 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:11.661284   20292 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:11.679081   20292 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:11.679129   20292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:11.696727   20292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:11.792378   20292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:11.792454   20292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:11.824163   20292 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:11.824188   20292 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:11.824194   20292 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:11.824199   20292 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:11.824204   20292 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:11.824210   20292 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:11.824215   20292 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:11.824219   20292 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:11.824223   20292 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:11.824241   20292 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:11.824251   20292 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:11.824256   20292 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:11.824261   20292 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:11.824266   20292 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:11.824275   20292 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:11.824282   20292 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:11.824289   20292 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:11.824294   20292 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:11.824299   20292 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:11.824304   20292 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:11.824308   20292 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:11.824313   20292 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:11.824321   20292 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:11.824326   20292 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:11.824333   20292 cri.go:89] found id: ""
	I1210 22:28:11.824389   20292 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:11.837838   20292 out.go:203] 
	W1210 22:28:11.838925   20292 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:11.838947   20292 out.go:285] * 
	* 
	W1210 22:28:11.842193   20292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:11.843337   20292 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)
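Same pattern as above: the registry-creds configure step and the secret check both complete; only the disable call fails with the runc error. The two successful steps, copied from the log, can be replayed as-is (the ./testdata path assumes the integration test's working directory):

	out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-713277
	kubectl --context addons-713277 -n kube-system get secret -o yaml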

                                                
                                    
TestAddons/parallel/Ingress (148.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-713277 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-713277 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-713277 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [03139f6b-ddf3-4138-8883-7ffb5d32d717] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [03139f6b-ddf3-4138-8883-7ffb5d32d717] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003820455s
I1210 22:28:18.456855    8660 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.494192592s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-713277 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
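Unlike the addon-disable failures above, Ingress fails on the probe itself: the in-node curl against 127.0.0.1 with the nginx.example.com Host header gets no response, and the ssh command returns status 28 after 2m15s (28 corresponds to curl's operation-timed-out exit code, passed back through the ssh session). The probe, copied from the log, can be replayed once the nginx pod is Running:

	out/minikube-linux-amd64 -p addons-713277 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

The nslookup against 192.168.49.2 that follows belongs to the ingress-dns half of the test; the post-mortem dump below starts after that step.
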
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-713277
helpers_test.go:244: (dbg) docker inspect addons-713277:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500",
	        "Created": "2025-12-10T22:26:15.572898264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T22:26:15.616590435Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/hostname",
	        "HostsPath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/hosts",
	        "LogPath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500-json.log",
	        "Name": "/addons-713277",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-713277:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-713277",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500",
	                "LowerDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-713277",
	                "Source": "/var/lib/docker/volumes/addons-713277/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-713277",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-713277",
	                "name.minikube.sigs.k8s.io": "addons-713277",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e51c06d9b3b98d4fb9a4f1fd695018face312d4f7b89056e8352b7cf2797c772",
	            "SandboxKey": "/var/run/docker/netns/e51c06d9b3b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-713277": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68f994aacdfe48dfffec610a926ba1df2096191c6ae50bc5d7210533d5089584",
	                    "EndpointID": "2aa2a0bf0d9f61ec121d5d3a005cc507ea9f3d50dda319dd7df4184695365669",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "de:ac:fa:1e:f2:6c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-713277",
	                        "df9731acd91b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-713277 -n addons-713277
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-713277 logs -n 25: (1.156154371s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-479778 --alsologtostderr --binary-mirror http://127.0.0.1:46291 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-479778 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ -p binary-mirror-479778                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-479778 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ addons  │ enable dashboard -p addons-713277                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-713277                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ start   │ -p addons-713277 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:27 UTC │
	│ addons  │ addons-713277 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:27 UTC │                     │
	│ addons  │ addons-713277 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-713277 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-713277                                                                                                                                                                                                                                                                                                                                                                                           │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ addons-713277 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ ip      │ addons-713277 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ addons-713277 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ ssh     │ addons-713277 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ ssh     │ addons-713277 ssh cat /opt/local-path-provisioner/pvc-35088769-b195-4bfe-be10-c3ca9b48e87f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ addons-713277 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ addons-713277 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ ip      │ addons-713277 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-713277        │ jenkins │ v1.37.0 │ 10 Dec 25 22:30 UTC │ 10 Dec 25 22:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:25:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
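
For readers who want to post-process the entries that follow, they use the klog/glog layout described on the line above. A minimal Go sketch of a parser for that format (illustrative only, not part of minikube; reads the log from stdin):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// Matches klog-style entries such as:
	//   I1210 22:25:52.425347   10419 out.go:360] Setting OutFile to fd 1 ...
	var entry = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ \]]+)\] (.*)$`)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines here are very long
		for sc.Scan() {
			m := entry.FindStringSubmatch(strings.TrimSpace(sc.Text()))
			if m == nil {
				continue // continuation lines, embedded YAML, tables, etc.
			}
			fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
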
	I1210 22:25:52.425347   10419 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:25:52.425618   10419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:52.425629   10419 out.go:374] Setting ErrFile to fd 2...
	I1210 22:25:52.425636   10419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:52.425870   10419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:25:52.426424   10419 out.go:368] Setting JSON to false
	I1210 22:25:52.427255   10419 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":494,"bootTime":1765405058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:25:52.427310   10419 start.go:143] virtualization: kvm guest
	I1210 22:25:52.429346   10419 out.go:179] * [addons-713277] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:25:52.430993   10419 notify.go:221] Checking for updates...
	I1210 22:25:52.431047   10419 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:25:52.432570   10419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:25:52.434025   10419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:25:52.435560   10419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:25:52.436856   10419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:25:52.438124   10419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:25:52.439601   10419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:25:52.464067   10419 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:25:52.464229   10419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:52.517830   10419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 22:25:52.508562141 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:52.517927   10419 docker.go:319] overlay module found
	I1210 22:25:52.519955   10419 out.go:179] * Using the docker driver based on user configuration
	I1210 22:25:52.521196   10419 start.go:309] selected driver: docker
	I1210 22:25:52.521213   10419 start.go:927] validating driver "docker" against <nil>
	I1210 22:25:52.521232   10419 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:25:52.521948   10419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:52.575958   10419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 22:25:52.566549943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:52.576142   10419 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:25:52.576384   10419 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 22:25:52.578116   10419 out.go:179] * Using Docker driver with root privileges
	I1210 22:25:52.579398   10419 cni.go:84] Creating CNI manager for ""
	I1210 22:25:52.579459   10419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 22:25:52.579469   10419 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 22:25:52.579531   10419 start.go:353] cluster config:
	{Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:25:52.581231   10419 out.go:179] * Starting "addons-713277" primary control-plane node in "addons-713277" cluster
	I1210 22:25:52.582561   10419 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 22:25:52.583787   10419 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 22:25:52.584973   10419 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:25:52.585011   10419 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 22:25:52.585021   10419 cache.go:65] Caching tarball of preloaded images
	I1210 22:25:52.585091   10419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 22:25:52.585113   10419 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 22:25:52.585125   10419 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 22:25:52.585600   10419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/config.json ...
	I1210 22:25:52.585626   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/config.json: {Name:mk8319a125c2c8127427cf1b33cd61c4fd701213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:25:52.601544   10419 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 22:25:52.601686   10419 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 22:25:52.601712   10419 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 22:25:52.601719   10419 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 22:25:52.601730   10419 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 22:25:52.601740   10419 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1210 22:26:05.198371   10419 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 22:26:05.198406   10419 cache.go:243] Successfully downloaded all kic artifacts
	I1210 22:26:05.198444   10419 start.go:360] acquireMachinesLock for addons-713277: {Name:mkedaedeb4d270ce44212898da8a4cf27fda7401 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 22:26:05.198537   10419 start.go:364] duration metric: took 76.687µs to acquireMachinesLock for "addons-713277"
	I1210 22:26:05.198560   10419 start.go:93] Provisioning new machine with config: &{Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 22:26:05.198634   10419 start.go:125] createHost starting for "" (driver="docker")
	I1210 22:26:05.200380   10419 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 22:26:05.200579   10419 start.go:159] libmachine.API.Create for "addons-713277" (driver="docker")
	I1210 22:26:05.200608   10419 client.go:173] LocalClient.Create starting
	I1210 22:26:05.200720   10419 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 22:26:05.229553   10419 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 22:26:05.383877   10419 cli_runner.go:164] Run: docker network inspect addons-713277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 22:26:05.401516   10419 cli_runner.go:211] docker network inspect addons-713277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 22:26:05.401592   10419 network_create.go:284] running [docker network inspect addons-713277] to gather additional debugging logs...
	I1210 22:26:05.401616   10419 cli_runner.go:164] Run: docker network inspect addons-713277
	W1210 22:26:05.417893   10419 cli_runner.go:211] docker network inspect addons-713277 returned with exit code 1
	I1210 22:26:05.417923   10419 network_create.go:287] error running [docker network inspect addons-713277]: docker network inspect addons-713277: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-713277 not found
	I1210 22:26:05.417937   10419 network_create.go:289] output of [docker network inspect addons-713277]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-713277 not found
	
	** /stderr **
	I1210 22:26:05.418040   10419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 22:26:05.435445   10419 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10ef0}
	I1210 22:26:05.435492   10419 network_create.go:124] attempt to create docker network addons-713277 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 22:26:05.435532   10419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-713277 addons-713277
	I1210 22:26:05.482520   10419 network_create.go:108] docker network addons-713277 192.168.49.0/24 created
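
The exit status 1 from `docker network inspect addons-713277` above is expected: the network does not exist yet, so a free private subnet is picked and the network is created. A hypothetical Go sketch of that probe-then-create pattern, with the flags copied from the `docker network create` command in the log (illustrative, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureNetwork creates the named bridge network only if inspecting it fails.
	func ensureNetwork(name, subnet, gateway string) error {
		if exec.Command("docker", "network", "inspect", name).Run() == nil {
			return nil // already exists
		}
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"--label=created_by.minikube.sigs.k8s.io=true",
			name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker network create %s: %v\n%s", name, err, out)
		}
		return nil
	}

	func main() {
		if err := ensureNetwork("addons-713277", "192.168.49.0/24", "192.168.49.1"); err != nil {
			fmt.Println(err)
		}
	}
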
	I1210 22:26:05.482546   10419 kic.go:121] calculated static IP "192.168.49.2" for the "addons-713277" container
	I1210 22:26:05.482607   10419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 22:26:05.500030   10419 cli_runner.go:164] Run: docker volume create addons-713277 --label name.minikube.sigs.k8s.io=addons-713277 --label created_by.minikube.sigs.k8s.io=true
	I1210 22:26:05.517311   10419 oci.go:103] Successfully created a docker volume addons-713277
	I1210 22:26:05.517378   10419 cli_runner.go:164] Run: docker run --rm --name addons-713277-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-713277 --entrypoint /usr/bin/test -v addons-713277:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 22:26:11.714265   10419 cli_runner.go:217] Completed: docker run --rm --name addons-713277-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-713277 --entrypoint /usr/bin/test -v addons-713277:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (6.19685051s)
	I1210 22:26:11.714303   10419 oci.go:107] Successfully prepared a docker volume addons-713277
	I1210 22:26:11.714361   10419 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:11.714375   10419 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 22:26:11.714425   10419 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-713277:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 22:26:15.504108   10419 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-713277:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.789639473s)
	I1210 22:26:15.504136   10419 kic.go:203] duration metric: took 3.789756995s to extract preloaded images to volume ...
	W1210 22:26:15.504233   10419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 22:26:15.504282   10419 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 22:26:15.504322   10419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 22:26:15.556938   10419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-713277 --name addons-713277 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-713277 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-713277 --network addons-713277 --ip 192.168.49.2 --volume addons-713277:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 22:26:15.864814   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Running}}
	I1210 22:26:15.883763   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:15.904331   10419 cli_runner.go:164] Run: docker exec addons-713277 stat /var/lib/dpkg/alternatives/iptables
	I1210 22:26:15.949808   10419 oci.go:144] the created container "addons-713277" has a running status.
	I1210 22:26:15.949839   10419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa...
	I1210 22:26:16.052971   10419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 22:26:16.079611   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:16.101589   10419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 22:26:16.101630   10419 kic_runner.go:114] Args: [docker exec --privileged addons-713277 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 22:26:16.144593   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:16.170501   10419 machine.go:94] provisionDockerMachine start ...
	I1210 22:26:16.170610   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.192598   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.192842   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.192861   10419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 22:26:16.333499   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-713277
	
	I1210 22:26:16.333526   10419 ubuntu.go:182] provisioning hostname "addons-713277"
	I1210 22:26:16.333588   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.352281   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.352567   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.352588   10419 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-713277 && echo "addons-713277" | sudo tee /etc/hostname
	I1210 22:26:16.496639   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-713277
	
	I1210 22:26:16.496736   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.516022   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.516236   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.516252   10419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-713277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-713277/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-713277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 22:26:16.648348   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 22:26:16.648373   10419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 22:26:16.648393   10419 ubuntu.go:190] setting up certificates
	I1210 22:26:16.648406   10419 provision.go:84] configureAuth start
	I1210 22:26:16.648459   10419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-713277
	I1210 22:26:16.665859   10419 provision.go:143] copyHostCerts
	I1210 22:26:16.665928   10419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 22:26:16.666100   10419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 22:26:16.666178   10419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 22:26:16.666232   10419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.addons-713277 san=[127.0.0.1 192.168.49.2 addons-713277 localhost minikube]
	I1210 22:26:16.710469   10419 provision.go:177] copyRemoteCerts
	I1210 22:26:16.710544   10419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 22:26:16.710581   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.727832   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:16.824146   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 22:26:16.843597   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 22:26:16.860761   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 22:26:16.877618   10419 provision.go:87] duration metric: took 229.197414ms to configureAuth
	I1210 22:26:16.877665   10419 ubuntu.go:206] setting minikube options for container-runtime
	I1210 22:26:16.877824   10419 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:16.877919   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.895383   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.895628   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.895667   10419 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 22:26:17.167347   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 22:26:17.167370   10419 machine.go:97] duration metric: took 996.846411ms to provisionDockerMachine
	I1210 22:26:17.167380   10419 client.go:176] duration metric: took 11.966764439s to LocalClient.Create
	I1210 22:26:17.167393   10419 start.go:167] duration metric: took 11.966813404s to libmachine.API.Create "addons-713277"
	I1210 22:26:17.167402   10419 start.go:293] postStartSetup for "addons-713277" (driver="docker")
	I1210 22:26:17.167414   10419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 22:26:17.167468   10419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 22:26:17.167500   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.185050   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.282726   10419 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 22:26:17.286271   10419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 22:26:17.286303   10419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 22:26:17.286313   10419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 22:26:17.286378   10419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 22:26:17.286401   10419 start.go:296] duration metric: took 118.993729ms for postStartSetup
	I1210 22:26:17.286686   10419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-713277
	I1210 22:26:17.303579   10419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/config.json ...
	I1210 22:26:17.303876   10419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:26:17.303918   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.322057   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.414665   10419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 22:26:17.419214   10419 start.go:128] duration metric: took 12.220566574s to createHost
	I1210 22:26:17.419239   10419 start.go:83] releasing machines lock for "addons-713277", held for 12.220689836s
	I1210 22:26:17.419306   10419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-713277
	I1210 22:26:17.436962   10419 ssh_runner.go:195] Run: cat /version.json
	I1210 22:26:17.437011   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.437044   10419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 22:26:17.437148   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.455922   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.456535   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.600629   10419 ssh_runner.go:195] Run: systemctl --version
	I1210 22:26:17.606989   10419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 22:26:17.640746   10419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 22:26:17.645596   10419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 22:26:17.645671   10419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 22:26:17.671387   10419 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 22:26:17.671415   10419 start.go:496] detecting cgroup driver to use...
	I1210 22:26:17.671452   10419 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 22:26:17.671494   10419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 22:26:17.687165   10419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 22:26:17.698836   10419 docker.go:218] disabling cri-docker service (if available) ...
	I1210 22:26:17.698887   10419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 22:26:17.714851   10419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 22:26:17.732316   10419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 22:26:17.813688   10419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 22:26:17.902576   10419 docker.go:234] disabling docker service ...
	I1210 22:26:17.902633   10419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 22:26:17.920838   10419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 22:26:17.932986   10419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 22:26:18.010146   10419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 22:26:18.087243   10419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 22:26:18.099198   10419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 22:26:18.112743   10419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 22:26:18.112791   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.123242   10419 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 22:26:18.123300   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.132264   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.140869   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.149311   10419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 22:26:18.156986   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.164968   10419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.177899   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
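
The sed invocations above pin the pause image, switch cri-o to the systemd cgroup manager, and allow unprivileged low ports in /etc/crio/crio.conf.d/02-crio.conf. A small Go sketch of the first two substitutions, shown purely for illustration (minikube performs them over SSH with sed, as logged):

	package main

	import (
		"fmt"
		"regexp"
	)

	var (
		pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	)

	// rewriteCrioConf forces the pause image and the systemd cgroup manager,
	// mirroring the sed edits to 02-crio.conf in the log.
	func rewriteCrioConf(conf string) string {
		conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		return conf
	}

	func main() {
		fmt.Print(rewriteCrioConf("# pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"))
	}
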
	I1210 22:26:18.186423   10419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 22:26:18.193425   10419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 22:26:18.193468   10419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 22:26:18.205335   10419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 22:26:18.212718   10419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:18.289834   10419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 22:26:18.412720   10419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 22:26:18.412801   10419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 22:26:18.416727   10419 start.go:564] Will wait 60s for crictl version
	I1210 22:26:18.416784   10419 ssh_runner.go:195] Run: which crictl
	I1210 22:26:18.420425   10419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 22:26:18.445488   10419 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 22:26:18.445601   10419 ssh_runner.go:195] Run: crio --version
	I1210 22:26:18.472482   10419 ssh_runner.go:195] Run: crio --version
	I1210 22:26:18.501975   10419 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 22:26:18.503362   10419 cli_runner.go:164] Run: docker network inspect addons-713277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 22:26:18.520622   10419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 22:26:18.524800   10419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 22:26:18.534947   10419 kubeadm.go:884] updating cluster {Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 22:26:18.535053   10419 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:18.535099   10419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 22:26:18.567183   10419 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 22:26:18.567203   10419 crio.go:433] Images already preloaded, skipping extraction
	I1210 22:26:18.567244   10419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 22:26:18.591688   10419 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 22:26:18.591709   10419 cache_images.go:86] Images are preloaded, skipping loading
	I1210 22:26:18.591716   10419 kubeadm.go:935] updating node { 192.168.49.2  8443 v1.34.2 crio true true} ...
	I1210 22:26:18.591809   10419 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-713277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 22:26:18.591871   10419 ssh_runner.go:195] Run: crio config
	I1210 22:26:18.634659   10419 cni.go:84] Creating CNI manager for ""
	I1210 22:26:18.634686   10419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 22:26:18.634707   10419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 22:26:18.634740   10419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-713277 NodeName:addons-713277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 22:26:18.634865   10419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-713277"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
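
The block above is the complete multi-document kubeadm config that is later written to /var/tmp/minikube/kubeadm.yaml.new (2209 bytes, per the scp line below). As a quick sanity check one can decode each document in turn; a hedged sketch using gopkg.in/yaml.v3, which is an assumed dependency for this example and not what minikube itself uses:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// Lists the kind and apiVersion of every document in a kubeadm.yaml read
	// from stdin, e.g. InitConfiguration, ClusterConfiguration, ...
	func main() {
		dec := yaml.NewDecoder(os.Stdin)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				return
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
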
	
	I1210 22:26:18.634944   10419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 22:26:18.642986   10419 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 22:26:18.643044   10419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 22:26:18.650936   10419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 22:26:18.662949   10419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 22:26:18.677549   10419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 22:26:18.689823   10419 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 22:26:18.693330   10419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 22:26:18.703193   10419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:18.782490   10419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 22:26:18.808038   10419 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277 for IP: 192.168.49.2
	I1210 22:26:18.808061   10419 certs.go:195] generating shared ca certs ...
	I1210 22:26:18.808080   10419 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.808241   10419 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 22:26:18.929357   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt ...
	I1210 22:26:18.929388   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt: {Name:mkbd30d3b4f4ba5b83e216c0671eb91421516806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.929584   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key ...
	I1210 22:26:18.929596   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key: {Name:mk81df2d47aaadeaa0810edca18da86636f14941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.929720   10419 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 22:26:18.960945   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt ...
	I1210 22:26:18.960981   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt: {Name:mk2f7a78b774462d65488c066902bd3b0099fa43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.961122   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key ...
	I1210 22:26:18.961138   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key: {Name:mkdc055b7f3370167cae79e8d6f08805a0012de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.961202   10419 certs.go:257] generating profile certs ...
	I1210 22:26:18.961253   10419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.key
	I1210 22:26:18.961266   10419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt with IP's: []
	I1210 22:26:19.103303   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt ...
	I1210 22:26:19.103330   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: {Name:mk733fd208879e3efff97dfc66c558c69ea74288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.103492   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.key ...
	I1210 22:26:19.103503   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.key: {Name:mk76c5b17ff6c8180ef7f8e7d7d0b263573cf628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.103562   10419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187
	I1210 22:26:19.103580   10419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 22:26:19.156810   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187 ...
	I1210 22:26:19.156838   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187: {Name:mk7e413d79e4f129c2878b25cf1750e72f209dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.156984   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187 ...
	I1210 22:26:19.156997   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187: {Name:mkcce4a4fcc035feffe0502e113eaaf30a4baa10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.157063   10419 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt
	I1210 22:26:19.157147   10419 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key
	I1210 22:26:19.157197   10419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key
	I1210 22:26:19.157216   10419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt with IP's: []
	I1210 22:26:19.322243   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt ...
	I1210 22:26:19.322274   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt: {Name:mk31c1cddd242b12293e2e5d6f788ae2f5bfa861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.322437   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key ...
	I1210 22:26:19.322448   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key: {Name:mk1991af609f07d13443419da545d786acfe061b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.322678   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 22:26:19.322718   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 22:26:19.322746   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 22:26:19.322774   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 22:26:19.323320   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 22:26:19.341374   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 22:26:19.358896   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 22:26:19.376131   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 22:26:19.393105   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 22:26:19.409785   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 22:26:19.427202   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 22:26:19.444354   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 22:26:19.461701   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 22:26:19.480844   10419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 22:26:19.493605   10419 ssh_runner.go:195] Run: openssl version
	I1210 22:26:19.499867   10419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.507418   10419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 22:26:19.517335   10419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.521031   10419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.521077   10419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.554819   10419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 22:26:19.562368   10419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
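For reference, the b5213941.0 name in the symlink above is simply the OpenSSL subject hash of the minikube CA, i.e. the output of the openssl x509 -hash call two lines earlier. A minimal sketch (same paths as in this log) that recreates the link by hand:

    # Compute the subject hash of the CA, then link it under /etc/ssl/certs/<hash>.0
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"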
	I1210 22:26:19.569855   10419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 22:26:19.573384   10419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 22:26:19.573453   10419 kubeadm.go:401] StartCluster: {Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:26:19.573522   10419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:26:19.573572   10419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:26:19.600554   10419 cri.go:89] found id: ""
	I1210 22:26:19.600622   10419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 22:26:19.608784   10419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 22:26:19.616864   10419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 22:26:19.616925   10419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 22:26:19.624342   10419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 22:26:19.624357   10419 kubeadm.go:158] found existing configuration files:
	
	I1210 22:26:19.624396   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 22:26:19.631550   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 22:26:19.631603   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 22:26:19.638586   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 22:26:19.645994   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 22:26:19.646041   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 22:26:19.653332   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 22:26:19.661148   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 22:26:19.661202   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 22:26:19.668325   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 22:26:19.675848   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 22:26:19.675901   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 22:26:19.683531   10419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 22:26:19.721889   10419 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 22:26:19.721954   10419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 22:26:19.741420   10419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 22:26:19.741482   10419 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 22:26:19.741521   10419 kubeadm.go:319] OS: Linux
	I1210 22:26:19.741578   10419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 22:26:19.741668   10419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 22:26:19.741742   10419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 22:26:19.741840   10419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 22:26:19.741930   10419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 22:26:19.742019   10419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 22:26:19.742109   10419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 22:26:19.742173   10419 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 22:26:19.797102   10419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 22:26:19.797238   10419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 22:26:19.797385   10419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 22:26:19.804102   10419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 22:26:19.807767   10419 out.go:252]   - Generating certificates and keys ...
	I1210 22:26:19.807878   10419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 22:26:19.807974   10419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 22:26:20.052789   10419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 22:26:20.454343   10419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 22:26:20.710808   10419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 22:26:20.790906   10419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 22:26:21.026521   10419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 22:26:21.026637   10419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-713277 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 22:26:21.120054   10419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 22:26:21.120200   10419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-713277 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 22:26:21.358275   10419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 22:26:21.513225   10419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 22:26:21.668168   10419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 22:26:21.668244   10419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 22:26:21.901278   10419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 22:26:22.093038   10419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 22:26:22.113948   10419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 22:26:22.327098   10419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 22:26:22.641040   10419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 22:26:22.641512   10419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 22:26:22.645178   10419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 22:26:22.646753   10419 out.go:252]   - Booting up control plane ...
	I1210 22:26:22.646867   10419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 22:26:22.646962   10419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 22:26:22.647474   10419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 22:26:22.673197   10419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 22:26:22.673365   10419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 22:26:22.679806   10419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 22:26:22.679993   10419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 22:26:22.680066   10419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 22:26:22.771794   10419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 22:26:22.771954   10419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 22:26:23.272356   10419 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.033849ms
	I1210 22:26:23.275083   10419 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 22:26:23.275213   10419 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 22:26:23.275338   10419 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 22:26:23.275475   10419 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 22:26:24.669547   10419 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.394397155s
	I1210 22:26:25.858206   10419 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.583207862s
	I1210 22:26:26.777217   10419 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502134857s
	I1210 22:26:26.793794   10419 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 22:26:26.802972   10419 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 22:26:26.811755   10419 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 22:26:26.811927   10419 kubeadm.go:319] [mark-control-plane] Marking the node addons-713277 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 22:26:26.819519   10419 kubeadm.go:319] [bootstrap-token] Using token: 54ecna.64azq7impk1jbwgg
	I1210 22:26:26.821536   10419 out.go:252]   - Configuring RBAC rules ...
	I1210 22:26:26.821692   10419 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 22:26:26.824622   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 22:26:26.829510   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 22:26:26.831792   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 22:26:26.835022   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 22:26:26.837207   10419 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 22:26:27.182782   10419 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 22:26:27.596560   10419 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 22:26:28.182900   10419 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 22:26:28.183774   10419 kubeadm.go:319] 
	I1210 22:26:28.183838   10419 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 22:26:28.183862   10419 kubeadm.go:319] 
	I1210 22:26:28.183955   10419 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 22:26:28.183963   10419 kubeadm.go:319] 
	I1210 22:26:28.183986   10419 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 22:26:28.184055   10419 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 22:26:28.184166   10419 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 22:26:28.184178   10419 kubeadm.go:319] 
	I1210 22:26:28.184244   10419 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 22:26:28.184253   10419 kubeadm.go:319] 
	I1210 22:26:28.184318   10419 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 22:26:28.184342   10419 kubeadm.go:319] 
	I1210 22:26:28.184434   10419 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 22:26:28.184545   10419 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 22:26:28.184636   10419 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 22:26:28.184676   10419 kubeadm.go:319] 
	I1210 22:26:28.184776   10419 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 22:26:28.184864   10419 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 22:26:28.184873   10419 kubeadm.go:319] 
	I1210 22:26:28.184991   10419 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 54ecna.64azq7impk1jbwgg \
	I1210 22:26:28.185122   10419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 22:26:28.185164   10419 kubeadm.go:319] 	--control-plane 
	I1210 22:26:28.185176   10419 kubeadm.go:319] 
	I1210 22:26:28.185248   10419 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 22:26:28.185254   10419 kubeadm.go:319] 
	I1210 22:26:28.185323   10419 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 54ecna.64azq7impk1jbwgg \
	I1210 22:26:28.185415   10419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 22:26:28.187470   10419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 22:26:28.187617   10419 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
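The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key. A hedged sketch of how a joining node could recompute it for comparison (CA path taken from the certificate copy step earlier in this log; assumes an RSA CA key, as in kubeadm's documented recipe):

    # Extract the CA public key, DER-encode it, and hash it; compare with the printed value
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'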
	I1210 22:26:28.187669   10419 cni.go:84] Creating CNI manager for ""
	I1210 22:26:28.187683   10419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 22:26:28.189573   10419 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 22:26:28.190835   10419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 22:26:28.194973   10419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 22:26:28.194988   10419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 22:26:28.209273   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
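With the docker driver plus the crio runtime, minikube recommends kindnet as the CNI (cni.go lines above) and applies its manifest with the command just shown. A quick hedged check that the rollout succeeded (the DaemonSet name and label are assumptions based on the default kindnet manifest, which is not shown in this log):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide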
	I1210 22:26:28.414382   10419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 22:26:28.414448   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:28.414519   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-713277 minikube.k8s.io/updated_at=2025_12_10T22_26_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=addons-713277 minikube.k8s.io/primary=true
	I1210 22:26:28.424007   10419 ops.go:34] apiserver oom_adj: -16
	I1210 22:26:28.489317   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:28.990150   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:29.489976   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:29.989447   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:30.489522   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:30.990356   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:31.490330   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:31.989977   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:32.490035   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:32.989399   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:33.489372   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:33.553166   10419 kubeadm.go:1114] duration metric: took 5.1387664s to wait for elevateKubeSystemPrivileges
	I1210 22:26:33.553216   10419 kubeadm.go:403] duration metric: took 13.979775684s to StartCluster
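The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: minikube polls until the controller-manager has created the default ServiceAccount, so the cluster-admin binding for kube-system:default created just before the loop is usable. A minimal equivalent sketch (binary and kubeconfig paths from this log; the poll interval is an assumption):

    # Poll until the "default" ServiceAccount exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done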
	I1210 22:26:33.553240   10419 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:33.553362   10419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:26:33.553783   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:33.553949   10419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 22:26:33.553975   10419 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 22:26:33.554037   10419 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 22:26:33.554148   10419 addons.go:70] Setting yakd=true in profile "addons-713277"
	I1210 22:26:33.554177   10419 addons.go:70] Setting inspektor-gadget=true in profile "addons-713277"
	I1210 22:26:33.554182   10419 addons.go:239] Setting addon yakd=true in "addons-713277"
	I1210 22:26:33.554198   10419 addons.go:239] Setting addon inspektor-gadget=true in "addons-713277"
	I1210 22:26:33.554204   10419 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:33.554221   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554228   10419 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-713277"
	I1210 22:26:33.554241   10419 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-713277"
	I1210 22:26:33.554254   10419 addons.go:70] Setting cloud-spanner=true in profile "addons-713277"
	I1210 22:26:33.554264   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554273   10419 addons.go:239] Setting addon cloud-spanner=true in "addons-713277"
	I1210 22:26:33.554265   10419 addons.go:70] Setting default-storageclass=true in profile "addons-713277"
	I1210 22:26:33.554292   10419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-713277"
	I1210 22:26:33.554290   10419 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-713277"
	I1210 22:26:33.554308   10419 addons.go:70] Setting ingress=true in profile "addons-713277"
	I1210 22:26:33.554332   10419 addons.go:239] Setting addon ingress=true in "addons-713277"
	I1210 22:26:33.554350   10419 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-713277"
	I1210 22:26:33.554355   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554359   10419 addons.go:70] Setting registry-creds=true in profile "addons-713277"
	I1210 22:26:33.554372   10419 addons.go:239] Setting addon registry-creds=true in "addons-713277"
	I1210 22:26:33.554379   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554391   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554391   10419 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-713277"
	I1210 22:26:33.554418   10419 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-713277"
	I1210 22:26:33.554450   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554747   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554805   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554818   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554839   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554843   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554862   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554986   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.555786   10419 addons.go:70] Setting gcp-auth=true in profile "addons-713277"
	I1210 22:26:33.555815   10419 mustload.go:66] Loading cluster: addons-713277
	I1210 22:26:33.556003   10419 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:33.556281   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557287   10419 addons.go:70] Setting storage-provisioner=true in profile "addons-713277"
	I1210 22:26:33.554298   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.557409   10419 addons.go:70] Setting metrics-server=true in profile "addons-713277"
	I1210 22:26:33.557623   10419 addons.go:239] Setting addon metrics-server=true in "addons-713277"
	I1210 22:26:33.557760   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.557424   10419 addons.go:70] Setting volumesnapshots=true in profile "addons-713277"
	I1210 22:26:33.557905   10419 addons.go:239] Setting addon volumesnapshots=true in "addons-713277"
	I1210 22:26:33.558361   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.558399   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557459   10419 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-713277"
	I1210 22:26:33.558677   10419 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-713277"
	I1210 22:26:33.558850   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.558942   10419 out.go:179] * Verifying Kubernetes components...
	I1210 22:26:33.559839   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.558952   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557495   10419 addons.go:70] Setting ingress-dns=true in profile "addons-713277"
	I1210 22:26:33.560195   10419 addons.go:239] Setting addon ingress-dns=true in "addons-713277"
	I1210 22:26:33.560231   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554221   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.561101   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557540   10419 addons.go:70] Setting registry=true in profile "addons-713277"
	I1210 22:26:33.563161   10419 addons.go:239] Setting addon registry=true in "addons-713277"
	I1210 22:26:33.563197   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.563599   10419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:33.557554   10419 addons.go:239] Setting addon storage-provisioner=true in "addons-713277"
	I1210 22:26:33.564301   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.557469   10419 addons.go:70] Setting volcano=true in profile "addons-713277"
	I1210 22:26:33.565397   10419 addons.go:239] Setting addon volcano=true in "addons-713277"
	I1210 22:26:33.565429   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.565976   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.568830   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.569025   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.570143   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.604966   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 22:26:33.606939   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 22:26:33.610420   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.617188   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 22:26:33.619666   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 22:26:33.622411   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 22:26:33.622487   10419 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 22:26:33.623847   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 22:26:33.625138   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 22:26:33.625528   10419 addons.go:239] Setting addon default-storageclass=true in "addons-713277"
	I1210 22:26:33.625582   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.625756   10419 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 22:26:33.625771   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 22:26:33.625830   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.626239   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.627855   10419 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 22:26:33.627980   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 22:26:33.631375   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 22:26:33.631397   10419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 22:26:33.631454   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.631628   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 22:26:33.631637   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 22:26:33.631697   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.639960   10419 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 22:26:33.645317   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 22:26:33.647494   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:26:33.647504   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 22:26:33.647523   10419 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 22:26:33.647530   10419 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 22:26:33.647596   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.647724   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 22:26:33.647739   10419 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 22:26:33.647804   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.649164   10419 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 22:26:33.649181   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 22:26:33.649229   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.651933   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:26:33.652076   10419 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 22:26:33.653694   10419 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 22:26:33.654844   10419 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 22:26:33.654865   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 22:26:33.654925   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.655552   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 22:26:33.656795   10419 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 22:26:33.656815   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 22:26:33.656857   10419 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 22:26:33.656862   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.656869   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 22:26:33.656917   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.656797   10419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 22:26:33.663614   10419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 22:26:33.663679   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 22:26:33.663797   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.668911   10419 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-713277"
	I1210 22:26:33.668954   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.669438   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.670735   10419 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 22:26:33.671997   10419 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 22:26:33.672021   10419 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 22:26:33.672055   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 22:26:33.672135   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.674511   10419 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 22:26:33.674769   10419 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 22:26:33.674800   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 22:26:33.674963   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.677541   10419 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 22:26:33.678748   10419 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 22:26:33.678807   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 22:26:33.678907   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	W1210 22:26:33.682005   10419 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 22:26:33.698349   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.716793   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.717545   10419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 22:26:33.717562   10419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 22:26:33.717625   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.726088   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.728224   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.730653   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.731187   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.732046   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.732874   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.744606   10419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
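The sed pipeline above injects a hosts block mapping 192.168.49.1 to host.minikube.internal into the CoreDNS Corefile (the "host record injected" line further down confirms it took effect). A quick hedged way to inspect the result, using the same kubectl invocation style as the rest of this log:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'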
	I1210 22:26:33.749668   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.751251   10419 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 22:26:33.751823   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.754759   10419 out.go:179]   - Using image docker.io/busybox:stable
	I1210 22:26:33.756776   10419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 22:26:33.756800   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 22:26:33.756856   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.758807   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.760708   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.763582   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.765552   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.792948   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.803320   10419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 22:26:33.890053   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 22:26:33.890097   10419 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 22:26:33.896429   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 22:26:33.896449   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 22:26:33.896473   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 22:26:33.910306   10419 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 22:26:33.910331   10419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 22:26:33.913233   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 22:26:33.913852   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 22:26:33.913870   10419 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 22:26:33.927941   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 22:26:33.927966   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 22:26:33.937403   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 22:26:33.943213   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 22:26:33.944673   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 22:26:33.946692   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 22:26:33.950563   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 22:26:33.953085   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 22:26:33.954434   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 22:26:33.955273   10419 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 22:26:33.955293   10419 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 22:26:33.955847   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 22:26:33.955862   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 22:26:33.961102   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 22:26:33.961122   10419 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 22:26:33.964593   10419 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 22:26:33.964611   10419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 22:26:33.971909   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 22:26:33.974544   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 22:26:33.974615   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 22:26:33.992825   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 22:26:33.992936   10419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 22:26:34.007962   10419 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 22:26:34.007982   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 22:26:34.009673   10419 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 22:26:34.009748   10419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 22:26:34.037869   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 22:26:34.037905   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 22:26:34.038411   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 22:26:34.038430   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 22:26:34.056182   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 22:26:34.056205   10419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 22:26:34.060383   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 22:26:34.080594   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 22:26:34.080615   10419 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 22:26:34.093968   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 22:26:34.107665   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 22:26:34.107698   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 22:26:34.124071   10419 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1210 22:26:34.125830   10419 node_ready.go:35] waiting up to 6m0s for node "addons-713277" to be "Ready" ...
	I1210 22:26:34.134235   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 22:26:34.143738   10419 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:26:34.143758   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 22:26:34.176332   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 22:26:34.176358   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 22:26:34.207875   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:26:34.247706   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 22:26:34.247733   10419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 22:26:34.340846   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 22:26:34.340959   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 22:26:34.412939   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 22:26:34.413023   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 22:26:34.463862   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 22:26:34.463890   10419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 22:26:34.526146   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 22:26:34.631503   10419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-713277" context rescaled to 1 replicas
	I1210 22:26:35.123226   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.168760027s)
	I1210 22:26:35.123269   10419 addons.go:495] Verifying addon ingress=true in "addons-713277"
	I1210 22:26:35.123604   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.151663997s)
	I1210 22:26:35.123668   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.0632392s)
	I1210 22:26:35.123697   10419 addons.go:495] Verifying addon registry=true in "addons-713277"
	I1210 22:26:35.123814   10419 addons.go:495] Verifying addon metrics-server=true in "addons-713277"
	I1210 22:26:35.123743   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.029684117s)
	I1210 22:26:35.125013   10419 out.go:179] * Verifying registry addon...
	I1210 22:26:35.125016   10419 out.go:179] * Verifying ingress addon...
	I1210 22:26:35.126222   10419 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-713277 service yakd-dashboard -n yakd-dashboard
	
	I1210 22:26:35.128223   10419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 22:26:35.128318   10419 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 22:26:35.131145   10419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 22:26:35.131163   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:35.131342   10419 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 22:26:35.131357   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
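Note: the two "Waiting for pod" pollers above watch label selectors in fixed namespaces (registry pods in kube-system, the ingress-nginx controller in its own namespace). A rough manual equivalent, sketched with kubectl and assuming the same selectors and a timeout in line with the 6m0s wait used elsewhere in this run, would be:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m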
	I1210 22:26:35.490404   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.282421489s)
	W1210 22:26:35.490449   10419 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 22:26:35.490482   10419 retry.go:31] will retry after 148.382508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 22:26:35.490721   10419 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-713277"
	I1210 22:26:35.496063   10419 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 22:26:35.498303   10419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 22:26:35.502385   10419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 22:26:35.502409   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:35.630805   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:35.630981   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:35.638986   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
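Note: the failed apply above is a CRD ordering race. The VolumeSnapshotClass object is submitted in the same kubectl apply as the CRDs that define it, and the API server rejects it ("ensure CRDs are installed first") because those CRDs are not yet established; the retry issued here succeeds about two seconds later once they are registered. A minimal sketch of avoiding the race by hand, assuming the manifest paths shown in the log, is to apply the CRDs first, wait for them to reach the Established condition, and only then apply the snapshot class and controller:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml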
	I1210 22:26:36.002182   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:36.128460   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:36.130732   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:36.130857   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:36.501710   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:36.631235   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:36.631280   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:37.001951   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:37.131017   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:37.131091   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:37.501098   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:37.630498   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:37.630668   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:38.001762   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:38.105801   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.466768693s)
	I1210 22:26:38.131408   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:38.131587   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:38.501920   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:38.628261   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:38.630689   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:38.630751   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:39.001181   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:39.131421   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:39.131597   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:39.501903   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:39.631396   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:39.631460   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:40.001301   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:40.130982   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:40.131031   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:40.501896   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:40.628984   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:40.630875   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:40.630992   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:41.001320   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:41.131554   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:41.131699   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:41.227234   10419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 22:26:41.227296   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:41.245487   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:41.347137   10419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 22:26:41.359810   10419 addons.go:239] Setting addon gcp-auth=true in "addons-713277"
	I1210 22:26:41.359861   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:41.360222   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:41.377359   10419 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 22:26:41.377419   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:41.394997   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:41.489913   10419 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 22:26:41.491394   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:26:41.492577   10419 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 22:26:41.492593   10419 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 22:26:41.501740   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:41.505952   10419 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 22:26:41.505974   10419 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 22:26:41.518961   10419 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 22:26:41.518983   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 22:26:41.531416   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 22:26:41.631105   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:41.631296   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:41.828981   10419 addons.go:495] Verifying addon gcp-auth=true in "addons-713277"
	I1210 22:26:41.830199   10419 out.go:179] * Verifying gcp-auth addon...
	I1210 22:26:41.832298   10419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 22:26:41.834446   10419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 22:26:41.834462   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:42.001289   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:42.130905   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:42.131006   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:42.335503   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:42.501486   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:42.629042   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:42.631024   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:42.631266   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:42.835112   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:43.001631   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:43.131329   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:43.131409   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:43.334939   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:43.501706   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:43.630417   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:43.630638   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:43.835052   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:44.001748   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:44.131163   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:44.131278   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:44.335934   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:44.501952   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:44.630419   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:44.630565   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:44.835036   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:45.001706   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:45.129303   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:45.131003   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:45.131223   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:45.335772   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:45.501721   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:45.630953   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:45.631148   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:45.835451   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:46.001240   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:46.130747   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:46.130987   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:46.335314   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:46.502175   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:46.630242   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:46.630349   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:46.834776   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:47.001387   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:47.130936   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:47.131098   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:47.335540   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:47.501391   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:47.629270   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:47.630765   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:47.630949   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:47.835942   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:48.001543   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:48.130860   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:48.130972   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:48.335570   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:48.502002   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:48.630390   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:48.630492   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:48.835085   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:49.001721   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:49.131136   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:49.131324   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:49.334990   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:49.501449   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:49.631007   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:49.631242   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:49.835683   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:50.001434   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:50.129091   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:50.130797   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:50.131138   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:50.335592   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:50.501703   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:50.631440   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:50.631585   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:50.835152   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:51.001865   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:51.130862   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:51.131213   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:51.335671   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:51.501071   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:51.630674   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:51.630760   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:51.835360   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:52.000806   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:52.129275   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:52.131154   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:52.131207   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:52.335700   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:52.501797   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:52.631096   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:52.631272   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:52.834667   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:53.001340   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:53.131016   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:53.131105   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:53.335744   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:53.501474   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:53.631317   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:53.631622   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:53.834958   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:54.001519   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:54.129424   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:54.131323   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:54.131550   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:54.334939   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:54.501942   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:54.630345   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:54.630585   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:54.834939   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:55.001896   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:55.130504   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:55.130677   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:55.334967   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:55.501726   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:55.631158   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:55.631289   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:55.835803   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:56.001265   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:56.130725   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:56.130905   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:56.335222   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:56.502099   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:56.628680   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:56.630491   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:56.630577   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:56.835430   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:57.001177   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:57.130509   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:57.130761   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:57.335107   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:57.502094   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:57.630506   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:57.630604   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:57.834821   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:58.001780   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:58.130925   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:58.130990   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:58.335550   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:58.501273   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:58.628796   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:58.630630   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:58.630831   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:58.835434   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:59.001613   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:59.130946   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:59.131214   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:59.335524   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:59.501167   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:59.630199   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:59.630351   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:59.834758   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:00.001205   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:00.130324   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:00.130509   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:00.334669   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:00.501602   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:00.629221   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:00.630750   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:00.630952   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:00.835438   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:01.000867   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:01.131049   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:01.131269   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:01.335719   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:01.501354   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:01.630511   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:01.630739   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:01.835224   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:02.001867   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:02.131143   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:02.131245   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:02.335505   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:02.501425   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:02.630759   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:02.630886   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:02.835414   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:03.000676   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:03.128972   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:03.130885   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:03.130903   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:03.335470   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:03.501242   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:03.630469   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:03.630607   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:03.835190   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:04.001760   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:04.131098   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:04.131291   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:04.334870   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:04.501620   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:04.630659   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:04.631055   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:04.835743   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:05.001163   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:05.130545   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:05.130709   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:05.335268   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:05.501290   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:05.628874   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:05.630478   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:05.630603   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:05.835137   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:06.001704   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:06.131153   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:06.131158   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:06.335636   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:06.501316   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:06.630617   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:06.630848   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:06.835249   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:07.001701   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:07.131296   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:07.131450   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:07.335137   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:07.500758   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:07.629578   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:07.631374   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:07.631404   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:07.834833   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:08.001444   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:08.130869   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:08.131038   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:08.335789   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:08.501518   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:08.630825   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:08.631031   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:08.835619   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:09.001444   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:09.130658   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:09.130817   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:09.335420   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:09.501039   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:09.630318   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:09.630417   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:09.834987   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:10.001625   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:10.129362   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:10.131054   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:10.131195   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:10.334813   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:10.501441   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:10.630729   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:10.630881   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:10.835505   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:11.001121   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:11.130447   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:11.130684   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:11.335438   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:11.501002   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:11.631268   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:11.631539   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:11.834959   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:12.001542   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:12.130732   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:12.130937   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:12.335493   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:12.501058   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:12.628731   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:12.630424   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:12.630586   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:12.835257   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:13.001767   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:13.130877   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:13.130944   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:13.335416   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:13.501111   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:13.630517   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:13.630731   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:13.835106   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:14.006819   10419 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 22:27:14.006846   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:14.128858   10419 node_ready.go:49] node "addons-713277" is "Ready"
	I1210 22:27:14.128893   10419 node_ready.go:38] duration metric: took 40.003040499s for node "addons-713277" to be "Ready" ...
	I1210 22:27:14.128910   10419 api_server.go:52] waiting for apiserver process to appear ...
	I1210 22:27:14.128985   10419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:27:14.130787   10419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 22:27:14.130808   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:14.130962   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:14.144802   10419 api_server.go:72] duration metric: took 40.590791282s to wait for apiserver process to appear ...
	I1210 22:27:14.144833   10419 api_server.go:88] waiting for apiserver healthz status ...
	I1210 22:27:14.144857   10419 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 22:27:14.148901   10419 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 22:27:14.149819   10419 api_server.go:141] control plane version: v1.34.2
	I1210 22:27:14.149845   10419 api_server.go:131] duration metric: took 5.004057ms to wait for apiserver health ...
	I1210 22:27:14.149856   10419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 22:27:14.152842   10419 system_pods.go:59] 20 kube-system pods found
	I1210 22:27:14.152872   10419 system_pods.go:61] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending
	I1210 22:27:14.152883   10419 system_pods.go:61] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:14.152895   10419 system_pods.go:61] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.152908   10419 system_pods.go:61] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.152918   10419 system_pods.go:61] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending
	I1210 22:27:14.152930   10419 system_pods.go:61] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.152938   10419 system_pods.go:61] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.152943   10419 system_pods.go:61] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.152951   10419 system_pods.go:61] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.152964   10419 system_pods.go:61] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.152972   10419 system_pods.go:61] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.152982   10419 system_pods.go:61] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.152987   10419 system_pods.go:61] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.152995   10419 system_pods.go:61] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending
	I1210 22:27:14.153006   10419 system_pods.go:61] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.153020   10419 system_pods.go:61] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.153029   10419 system_pods.go:61] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending
	I1210 22:27:14.153040   10419 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.153049   10419 system_pods.go:61] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending
	I1210 22:27:14.153059   10419 system_pods.go:61] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:14.153070   10419 system_pods.go:74] duration metric: took 3.206813ms to wait for pod list to return data ...
	I1210 22:27:14.153080   10419 default_sa.go:34] waiting for default service account to be created ...
	I1210 22:27:14.155059   10419 default_sa.go:45] found service account: "default"
	I1210 22:27:14.155079   10419 default_sa.go:55] duration metric: took 1.98908ms for default service account to be created ...
	I1210 22:27:14.155089   10419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 22:27:14.159275   10419 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:14.159305   10419 system_pods.go:89] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending
	I1210 22:27:14.159315   10419 system_pods.go:89] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:14.159325   10419 system_pods.go:89] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.159335   10419 system_pods.go:89] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.159345   10419 system_pods.go:89] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending
	I1210 22:27:14.159351   10419 system_pods.go:89] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.159357   10419 system_pods.go:89] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.159362   10419 system_pods.go:89] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.159369   10419 system_pods.go:89] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.159382   10419 system_pods.go:89] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.159394   10419 system_pods.go:89] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.159402   10419 system_pods.go:89] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.159412   10419 system_pods.go:89] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.159418   10419 system_pods.go:89] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending
	I1210 22:27:14.159434   10419 system_pods.go:89] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.159445   10419 system_pods.go:89] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.159450   10419 system_pods.go:89] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending
	I1210 22:27:14.159458   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.159464   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending
	I1210 22:27:14.159472   10419 system_pods.go:89] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:14.159488   10419 retry.go:31] will retry after 222.081605ms: missing components: kube-dns
	I1210 22:27:14.336445   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:14.443843   10419 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:14.443888   10419 system_pods.go:89] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 22:27:14.443899   10419 system_pods.go:89] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:14.443910   10419 system_pods.go:89] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.443920   10419 system_pods.go:89] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.443929   10419 system_pods.go:89] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 22:27:14.443937   10419 system_pods.go:89] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.443944   10419 system_pods.go:89] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.443950   10419 system_pods.go:89] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.443956   10419 system_pods.go:89] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.443965   10419 system_pods.go:89] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.443971   10419 system_pods.go:89] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.443977   10419 system_pods.go:89] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.443986   10419 system_pods.go:89] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.443997   10419 system_pods.go:89] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 22:27:14.444013   10419 system_pods.go:89] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.444021   10419 system_pods.go:89] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.444032   10419 system_pods.go:89] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 22:27:14.444046   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.444058   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.444069   10419 system_pods.go:89] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:14.444087   10419 retry.go:31] will retry after 259.327228ms: missing components: kube-dns
	I1210 22:27:14.538036   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:14.638269   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:14.638311   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:14.741312   10419 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:14.741351   10419 system_pods.go:89] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 22:27:14.741360   10419 system_pods.go:89] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Running
	I1210 22:27:14.741372   10419 system_pods.go:89] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.741381   10419 system_pods.go:89] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.741390   10419 system_pods.go:89] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 22:27:14.741397   10419 system_pods.go:89] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.741403   10419 system_pods.go:89] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.741409   10419 system_pods.go:89] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.741414   10419 system_pods.go:89] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.741422   10419 system_pods.go:89] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.741427   10419 system_pods.go:89] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.741434   10419 system_pods.go:89] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.741441   10419 system_pods.go:89] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.741451   10419 system_pods.go:89] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 22:27:14.741459   10419 system_pods.go:89] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.741468   10419 system_pods.go:89] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.741478   10419 system_pods.go:89] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 22:27:14.741485   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.741496   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.741501   10419 system_pods.go:89] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Running
	I1210 22:27:14.741511   10419 system_pods.go:126] duration metric: took 586.415394ms to wait for k8s-apps to be running ...
	I1210 22:27:14.741521   10419 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 22:27:14.741571   10419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:27:14.760225   10419 system_svc.go:56] duration metric: took 18.694973ms WaitForService to wait for kubelet
	I1210 22:27:14.760257   10419 kubeadm.go:587] duration metric: took 41.206250301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 22:27:14.760294   10419 node_conditions.go:102] verifying NodePressure condition ...
	I1210 22:27:14.764445   10419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 22:27:14.764475   10419 node_conditions.go:123] node cpu capacity is 8
	I1210 22:27:14.764492   10419 node_conditions.go:105] duration metric: took 4.192291ms to run NodePressure ...
	I1210 22:27:14.764508   10419 start.go:242] waiting for startup goroutines ...
	I1210 22:27:14.839990   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:15.002305   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:15.133555   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:15.134816   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:15.335431   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:15.501588   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:15.631440   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:15.631566   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:15.835038   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:16.002025   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:16.131482   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:16.131626   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:16.335592   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:16.501825   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:16.631963   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:16.632133   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:16.835958   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:17.002342   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:17.132359   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:17.132360   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:17.335275   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:17.501616   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:17.631699   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:17.632107   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:17.836259   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:18.002237   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:18.132295   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:18.132325   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:18.335803   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:18.502186   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:18.632569   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:18.632785   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:18.835358   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:19.001693   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:19.131702   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:19.131715   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:19.335345   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:19.501811   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:19.631491   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:19.631496   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:19.834907   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:20.002161   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:20.132727   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:20.132761   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:20.335730   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:20.502380   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:20.631604   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:20.631639   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:20.835923   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:21.002564   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:21.131310   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:21.131542   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.335938   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:21.501868   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:21.631717   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:21.631869   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.835786   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:22.002871   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:22.134280   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:22.134325   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:22.336693   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:22.501443   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:22.631761   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:22.631999   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:22.836038   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:23.002381   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:23.131212   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:23.131345   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:23.336125   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:23.502024   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:23.632094   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:23.632150   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:23.836249   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:24.002308   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:24.131790   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:24.131875   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:24.335442   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:24.501833   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:24.635040   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:24.635051   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:24.835713   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:25.002021   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:25.132093   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:25.132181   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:25.336032   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:25.501865   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:25.631450   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:25.631552   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:25.836416   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:26.001682   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:26.131734   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:26.131850   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:26.335239   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:26.502492   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:26.631047   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:26.631291   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:26.834842   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:27.002146   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:27.131689   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:27.131742   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:27.335205   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:27.501579   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:27.633784   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:27.633885   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:27.835799   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:28.001824   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:28.131781   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:28.131826   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:28.335296   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:28.501025   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:28.656919   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:28.701841   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:28.835825   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:29.001879   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:29.131708   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:29.131901   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:29.335627   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:29.501971   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:29.632292   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:29.632314   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:29.836295   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:30.001856   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:30.131812   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:30.131948   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:30.336094   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:30.502476   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:30.631347   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:30.631447   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:30.835082   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:31.002341   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:31.131303   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:31.131404   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:31.335899   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:31.502775   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:31.630510   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:31.630807   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:31.835048   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:32.001874   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:32.131674   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:32.131725   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:32.335117   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:32.502487   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:32.642245   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:32.642309   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:32.946955   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:33.049254   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:33.131974   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:33.132073   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:33.335725   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:33.501870   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:33.634535   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:33.634831   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:33.835206   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:34.002536   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:34.130926   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:34.131028   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:34.335284   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:34.502687   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:34.631427   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:34.631661   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:34.835324   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:35.002090   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:35.131601   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:35.131633   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:35.334957   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:35.501686   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:35.631555   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:35.631905   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:35.835828   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:36.002794   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:36.133977   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:36.134361   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:36.335241   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:36.502469   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:36.631289   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:36.631332   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:36.835638   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:37.063873   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:37.132166   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:37.132197   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:37.334937   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:37.502320   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:37.631957   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:37.632044   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:37.835922   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:38.002136   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:38.131638   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:38.131917   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:38.335139   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:38.501932   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:38.631871   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:38.631920   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:38.835826   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:39.002280   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:39.132311   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:39.132354   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:39.335638   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:39.501568   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:39.631117   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:39.631308   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:39.836503   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.002390   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.132510   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.132806   10419 kapi.go:107] duration metric: took 1m5.004584186s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 22:27:40.335397   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.502087   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.631503   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.835403   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:41.002060   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:41.131615   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:41.335091   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:41.503132   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:41.631798   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:41.835933   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:42.002314   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:42.132470   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:42.335243   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:42.501545   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:42.632070   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:42.835805   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:43.002191   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:43.132511   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:43.334853   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:43.502366   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:43.631246   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:43.836635   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:44.002753   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:44.131603   10419 kapi.go:107] duration metric: took 1m9.003281378s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 22:27:44.341292   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:44.501900   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:44.835768   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:45.002688   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:45.335528   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:45.501751   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:45.835788   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:46.003479   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:46.336187   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:46.502310   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:46.834956   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:47.002594   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:47.335771   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:47.502724   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:47.835061   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:48.002276   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:48.334922   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:48.502303   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:48.835895   10419 kapi.go:107] duration metric: took 1m7.003598985s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 22:27:48.837742   10419 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-713277 cluster.
	I1210 22:27:48.839097   10419 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 22:27:48.840503   10419 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 22:27:49.002132   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:49.502735   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:50.002509   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:50.502389   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:51.002117   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:51.501767   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:52.002451   10419 kapi.go:107] duration metric: took 1m16.504148561s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 22:27:52.004598   10419 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, registry-creds, storage-provisioner, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 22:27:52.006147   10419 addons.go:530] duration metric: took 1m18.452106426s for enable addons: enabled=[nvidia-device-plugin ingress-dns registry-creds storage-provisioner cloud-spanner amd-gpu-device-plugin inspektor-gadget default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 22:27:52.006198   10419 start.go:247] waiting for cluster config update ...
	I1210 22:27:52.006225   10419 start.go:256] writing updated cluster config ...
	I1210 22:27:52.006484   10419 ssh_runner.go:195] Run: rm -f paused
	I1210 22:27:52.010377   10419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 22:27:52.013500   10419 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7vb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.017235   10419 pod_ready.go:94] pod "coredns-66bc5c9577-q7vb5" is "Ready"
	I1210 22:27:52.017253   10419 pod_ready.go:86] duration metric: took 3.734284ms for pod "coredns-66bc5c9577-q7vb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.018927   10419 pod_ready.go:83] waiting for pod "etcd-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.022048   10419 pod_ready.go:94] pod "etcd-addons-713277" is "Ready"
	I1210 22:27:52.022065   10419 pod_ready.go:86] duration metric: took 3.120005ms for pod "etcd-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.023537   10419 pod_ready.go:83] waiting for pod "kube-apiserver-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.027982   10419 pod_ready.go:94] pod "kube-apiserver-addons-713277" is "Ready"
	I1210 22:27:52.028001   10419 pod_ready.go:86] duration metric: took 4.448174ms for pod "kube-apiserver-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.029594   10419 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.414295   10419 pod_ready.go:94] pod "kube-controller-manager-addons-713277" is "Ready"
	I1210 22:27:52.414324   10419 pod_ready.go:86] duration metric: took 384.71098ms for pod "kube-controller-manager-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.614169   10419 pod_ready.go:83] waiting for pod "kube-proxy-mtnxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.013425   10419 pod_ready.go:94] pod "kube-proxy-mtnxn" is "Ready"
	I1210 22:27:53.013451   10419 pod_ready.go:86] duration metric: took 399.26052ms for pod "kube-proxy-mtnxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.214674   10419 pod_ready.go:83] waiting for pod "kube-scheduler-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.613718   10419 pod_ready.go:94] pod "kube-scheduler-addons-713277" is "Ready"
	I1210 22:27:53.613745   10419 pod_ready.go:86] duration metric: took 399.042807ms for pod "kube-scheduler-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.613761   10419 pod_ready.go:40] duration metric: took 1.60335508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 22:27:53.656248   10419 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 22:27:53.659036   10419 out.go:179] * Done! kubectl is now configured to use "addons-713277" cluster and "default" namespace by default
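
	(Editorial note, not part of the test run: the gcp-auth messages above state that credentials are mounted into every pod unless the pod carries a label with the `gcp-auth-skip-secret` key. As a minimal, hedged sketch of what such a pod configuration could look like, the Go program below builds a pod spec carrying that label key with the standard Kubernetes API types and prints it as YAML. The pod name, image, and the label value "true" are illustrative assumptions; per the message above, it is the presence of the label key that the addon checks for.)

	// sketch: a pod spec labeled so the gcp-auth addon skips credential mounting
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "busybox-no-gcp-auth", // illustrative name
				// Label key referenced by the gcp-auth message above;
				// the value "true" is an assumption for illustration.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "gcr.io/k8s-minikube/busybox", // illustrative image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}

	(Applying the printed manifest with kubectl should yield a pod that the gcp-auth webhook leaves without mounted credentials, per the addon output above.)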
	
	
	==> CRI-O <==
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.400453079Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-zkdbf/POD" id=88b7a603-0449-41cd-9a1c-d1a070cf1cd4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.400534272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.406913105Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zkdbf Namespace:default ID:936fad188dfd62638f67136c98f126165342b17124fd113d3d6cf3aab6b78fec UID:a47506e0-aff6-4a7d-9215-b65f72a42c2a NetNS:/var/run/netns/532dba9e-def7-4cd2-b156-014442dd0b1c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b608}] Aliases:map[]}"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.406946754Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-zkdbf to CNI network \"kindnet\" (type=ptp)"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.418252699Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zkdbf Namespace:default ID:936fad188dfd62638f67136c98f126165342b17124fd113d3d6cf3aab6b78fec UID:a47506e0-aff6-4a7d-9215-b65f72a42c2a NetNS:/var/run/netns/532dba9e-def7-4cd2-b156-014442dd0b1c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b608}] Aliases:map[]}"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.41837326Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-zkdbf for CNI network kindnet (type=ptp)"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.419276134Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.420058613Z" level=info msg="Ran pod sandbox 936fad188dfd62638f67136c98f126165342b17124fd113d3d6cf3aab6b78fec with infra container: default/hello-world-app-5d498dc89-zkdbf/POD" id=88b7a603-0449-41cd-9a1c-d1a070cf1cd4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.421337673Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=25786d28-9e46-4376-b150-74a74668635a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.421460461Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=25786d28-9e46-4376-b150-74a74668635a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.42149023Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=25786d28-9e46-4376-b150-74a74668635a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.422126976Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=f241f0b7-461a-4a96-95e3-c0c5fa73292c name=/runtime.v1.ImageService/PullImage
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.428182611Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.881711908Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=f241f0b7-461a-4a96-95e3-c0c5fa73292c name=/runtime.v1.ImageService/PullImage
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.882343054Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f99e07ef-3aeb-4440-9cdc-e2a0b00821f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.884099396Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1520bb55-aac5-44cd-87eb-b20a3be80ebb name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.88814871Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-zkdbf/hello-world-app" id=45e163a3-2191-4b0f-a426-81e43b189521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.888294391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.894177506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.894400567Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4c1d07d213862207aed164b6a8ea6ae5e90e5ca6d8562955af95db788f70c26c/merged/etc/passwd: no such file or directory"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.894436004Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4c1d07d213862207aed164b6a8ea6ae5e90e5ca6d8562955af95db788f70c26c/merged/etc/group: no such file or directory"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.894744428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.923071949Z" level=info msg="Created container 683e0b7525463a21b13a745c5978ea85a6da01936d660cb404faaf72af31e6cc: default/hello-world-app-5d498dc89-zkdbf/hello-world-app" id=45e163a3-2191-4b0f-a426-81e43b189521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.92381045Z" level=info msg="Starting container: 683e0b7525463a21b13a745c5978ea85a6da01936d660cb404faaf72af31e6cc" id=8dd9f92a-2d35-42f6-b313-ea5455de8afd name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 22:30:34 addons-713277 crio[775]: time="2025-12-10T22:30:34.925663459Z" level=info msg="Started container" PID=9504 containerID=683e0b7525463a21b13a745c5978ea85a6da01936d660cb404faaf72af31e6cc description=default/hello-world-app-5d498dc89-zkdbf/hello-world-app id=8dd9f92a-2d35-42f6-b313-ea5455de8afd name=/runtime.v1.RuntimeService/StartContainer sandboxID=936fad188dfd62638f67136c98f126165342b17124fd113d3d6cf3aab6b78fec
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	683e0b7525463       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   936fad188dfd6       hello-world-app-5d498dc89-zkdbf             default
	c2be865d75697       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   06bbda0690244       registry-creds-764b6fb674-dkzdq             kube-system
	0885d49bf09fa       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   437be86b0db4a       nginx                                       default
	9f42c59cb2d29       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   f3100f483947a       busybox                                     default
	4c9bba5f39f38       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	74c081e28286c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	48438e9e3f252       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	ae6e6a01168ed       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   8d6764f406a8e       gcp-auth-78565c9fb4-xcp2p                   gcp-auth
	2163a8cf9861c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	5abf94cf2bb20       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   f87838fd8c17e       gadget-9zvtj                                gadget
	2450e25bed015       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	a99aabaa871bf       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   fd748af321177       ingress-nginx-controller-85d4c799dd-f4mfr   ingress-nginx
	d7718e5cd6534       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago            Exited              patch                                    2                   2969902d131d3       ingress-nginx-admission-patch-5hp7s         ingress-nginx
	165ba560b21ce       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   fc647661f67a3       registry-proxy-tlfbx                        kube-system
	28bfb1217531d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   70aa7b2637f08       nvidia-device-plugin-daemonset-xz7l5        kube-system
	8fc592d7667df       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   62bbcb9c1106b       amd-gpu-device-plugin-9zlkh                 kube-system
	1db25aab3edc4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	04ea9bfa0bce4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   6bee991f8107e       csi-hostpath-attacher-0                     kube-system
	6ae10e6bd3d43       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   5ff9db8ed6958       snapshot-controller-7d9fbc56b8-rwmd4        kube-system
	979e705cc3192       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   f61fde7b45534       snapshot-controller-7d9fbc56b8-w5cgz        kube-system
	079244ec7bd48       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   516e758d4c1aa       csi-hostpath-resizer-0                      kube-system
	a327a461cdc90       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   35a15ea941931       yakd-dashboard-5ff678cb9-7ccv7              yakd-dashboard
	aca086177b901       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   58fda7a7f0615       ingress-nginx-admission-create-8t8lk        ingress-nginx
	d3ada68a097ba       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   dbcc79f1967b3       registry-6b586f9694-95ck7                   kube-system
	da0b5c6014eca       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   9bd519cc4093f       local-path-provisioner-648f6765c9-nzlgq     local-path-storage
	0f28ecb799a2a       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   c72c25acc12f6       cloud-spanner-emulator-5bdddb765-lw7mn      default
	bb607f8a94b39       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   c62554d2dd6a7       metrics-server-85b7d694d7-f8kpc             kube-system
	b4b4d4119a9e0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   c860bd0c9e388       kube-ingress-dns-minikube                   kube-system
	32ba87316889a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   750ec8d05964f       coredns-66bc5c9577-q7vb5                    kube-system
	1823e7451c0fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   ce83b3201bb4b       storage-provisioner                         kube-system
	23b97f2410dd1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   128858036ffe3       kube-proxy-mtnxn                            kube-system
	bef4905bf4d28       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   1d65c854a47b4       kindnet-cjq4d                               kube-system
	41f1ac5834be0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   7a839a1013562       kube-scheduler-addons-713277                kube-system
	a19c2cf65ed7f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   e0c7ec1eb0b65       kube-apiserver-addons-713277                kube-system
	5f60ada2aeca2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   240cd6deb42b6       etcd-addons-713277                          kube-system
	7e9f40ca0ad08       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   7f705f8105d84       kube-controller-manager-addons-713277       kube-system
	
	
	==> coredns [32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca] <==
	[INFO] 10.244.0.22:53409 - 38271 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152617s
	[INFO] 10.244.0.22:51103 - 57088 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005809622s
	[INFO] 10.244.0.22:47589 - 37312 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008694252s
	[INFO] 10.244.0.22:50642 - 61793 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005299168s
	[INFO] 10.244.0.22:46570 - 3705 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006843894s
	[INFO] 10.244.0.22:47458 - 32974 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006133263s
	[INFO] 10.244.0.22:56120 - 61020 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006177065s
	[INFO] 10.244.0.22:40838 - 5122 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001005577s
	[INFO] 10.244.0.22:40789 - 46888 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002263515s
	[INFO] 10.244.0.25:37685 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000236213s
	[INFO] 10.244.0.25:56465 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160555s
	[INFO] 10.244.0.28:51205 - 35415 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000192896s
	[INFO] 10.244.0.28:57560 - 48733 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000245524s
	[INFO] 10.244.0.28:50510 - 22368 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000131947s
	[INFO] 10.244.0.28:47434 - 5273 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000184345s
	[INFO] 10.244.0.28:59578 - 13959 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000111343s
	[INFO] 10.244.0.28:56943 - 40301 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000139251s
	[INFO] 10.244.0.28:57588 - 44514 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006840039s
	[INFO] 10.244.0.28:50202 - 33059 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006968335s
	[INFO] 10.244.0.28:57871 - 65513 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004934199s
	[INFO] 10.244.0.28:53642 - 27410 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005466111s
	[INFO] 10.244.0.28:36925 - 8191 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004772889s
	[INFO] 10.244.0.28:53060 - 3396 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005112044s
	[INFO] 10.244.0.28:36999 - 34692 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001624518s
	[INFO] 10.244.0.28:50622 - 36675 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001683129s
	
	
	==> describe nodes <==
	Name:               addons-713277
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-713277
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=addons-713277
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T22_26_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-713277
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-713277"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 22:26:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-713277
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 22:30:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 22:28:29 +0000   Wed, 10 Dec 2025 22:26:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 22:28:29 +0000   Wed, 10 Dec 2025 22:26:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 22:28:29 +0000   Wed, 10 Dec 2025 22:26:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 22:28:29 +0000   Wed, 10 Dec 2025 22:27:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-713277
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                d2f88722-02be-4edd-a9d7-2da89e9e84d9
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     cloud-spanner-emulator-5bdddb765-lw7mn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  default                     hello-world-app-5d498dc89-zkdbf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-9zvtj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  gcp-auth                    gcp-auth-78565c9fb4-xcp2p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-f4mfr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m
	  kube-system                 amd-gpu-device-plugin-9zlkh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 coredns-66bc5c9577-q7vb5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 csi-hostpathplugin-hswm7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 etcd-addons-713277                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-cjq4d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-addons-713277                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-addons-713277        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-mtnxn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-addons-713277                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 metrics-server-85b7d694d7-f8kpc              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m1s
	  kube-system                 nvidia-device-plugin-daemonset-xz7l5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 registry-6b586f9694-95ck7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-creds-764b6fb674-dkzdq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-proxy-tlfbx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-rwmd4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 snapshot-controller-7d9fbc56b8-w5cgz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  local-path-storage          local-path-provisioner-648f6765c9-nzlgq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7ccv7               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node addons-713277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node addons-713277 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x8 over 4m12s)  kubelet          Node addons-713277 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s                   kubelet          Node addons-713277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s                   kubelet          Node addons-713277 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s                   kubelet          Node addons-713277 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m3s                   node-controller  Node addons-713277 event: Registered Node addons-713277 in Controller
	  Normal  NodeReady                3m22s                  kubelet          Node addons-713277 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.094266] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026475] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.925136] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 22:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +1.053538] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +1.023850] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +1.024862] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +1.022903] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +2.047799] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +4.031530] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	
	
	==> etcd [5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef] <==
	{"level":"warn","ts":"2025-12-10T22:26:24.682572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.689093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.696289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.711900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.718452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.724792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.731401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.739247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.745608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.771659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.779273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.787277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.840303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:35.860147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:35.866702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.267209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.274667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.292770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.305603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:32.945163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.375226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:27:32.945268Z","caller":"traceutil/trace.go:172","msg":"trace[1392252888] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1106; }","duration":"110.488177ms","start":"2025-12-10T22:27:32.834765Z","end":"2025-12-10T22:27:32.945253Z","steps":["trace[1392252888] 'agreement among raft nodes before linearized reading'  (duration: 47.616744ms)","trace[1392252888] 'range keys from in-memory index tree'  (duration: 62.733551ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:32.945308Z","caller":"traceutil/trace.go:172","msg":"trace[1033802621] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"182.215784ms","start":"2025-12-10T22:27:32.763076Z","end":"2025-12-10T22:27:32.945291Z","steps":["trace[1033802621] 'process raft request'  (duration: 119.349642ms)","trace[1033802621] 'compare'  (duration: 62.652756ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:36.834312Z","caller":"traceutil/trace.go:172","msg":"trace[1650918149] transaction","detail":"{read_only:false; response_revision:1137; number_of_response:1; }","duration":"179.39717ms","start":"2025-12-10T22:27:36.654884Z","end":"2025-12-10T22:27:36.834281Z","steps":["trace[1650918149] 'process raft request'  (duration: 96.594074ms)","trace[1650918149] 'compare'  (duration: 82.557407ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:37.061966Z","caller":"traceutil/trace.go:172","msg":"trace[833499783] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"148.365728ms","start":"2025-12-10T22:27:36.913581Z","end":"2025-12-10T22:27:37.061947Z","steps":["trace[833499783] 'process raft request'  (duration: 81.527996ms)","trace[833499783] 'compare'  (duration: 66.715264ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:49.988342Z","caller":"traceutil/trace.go:172","msg":"trace[1051301466] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"134.205396ms","start":"2025-12-10T22:27:49.854113Z","end":"2025-12-10T22:27:49.988319Z","steps":["trace[1051301466] 'process raft request'  (duration: 51.901972ms)","trace[1051301466] 'compare'  (duration: 82.070747ms)"],"step_count":2}
	
	
	==> gcp-auth [ae6e6a01168ed89fd4ee6ba681bee9a03fc8fc0d6654dc4ceaddd87cef212eff] <==
	2025/12/10 22:27:48 GCP Auth Webhook started!
	2025/12/10 22:27:53 Ready to marshal response ...
	2025/12/10 22:27:53 Ready to write response ...
	2025/12/10 22:27:54 Ready to marshal response ...
	2025/12/10 22:27:54 Ready to write response ...
	2025/12/10 22:27:54 Ready to marshal response ...
	2025/12/10 22:27:54 Ready to write response ...
	2025/12/10 22:28:09 Ready to marshal response ...
	2025/12/10 22:28:09 Ready to write response ...
	2025/12/10 22:28:13 Ready to marshal response ...
	2025/12/10 22:28:13 Ready to write response ...
	2025/12/10 22:28:17 Ready to marshal response ...
	2025/12/10 22:28:17 Ready to write response ...
	2025/12/10 22:28:17 Ready to marshal response ...
	2025/12/10 22:28:17 Ready to write response ...
	2025/12/10 22:28:17 Ready to marshal response ...
	2025/12/10 22:28:17 Ready to write response ...
	2025/12/10 22:28:25 Ready to marshal response ...
	2025/12/10 22:28:25 Ready to write response ...
	2025/12/10 22:28:43 Ready to marshal response ...
	2025/12/10 22:28:43 Ready to write response ...
	2025/12/10 22:30:34 Ready to marshal response ...
	2025/12/10 22:30:34 Ready to write response ...
	
	
	==> kernel <==
	 22:30:35 up 12 min,  0 user,  load average: 0.28, 0.60, 0.31
	Linux addons-713277 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc] <==
	I1210 22:28:33.791201       1 main.go:301] handling current node
	I1210 22:28:43.790881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:28:43.790924       1 main.go:301] handling current node
	I1210 22:28:53.791891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:28:53.791935       1 main.go:301] handling current node
	I1210 22:29:03.790858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:29:03.790892       1 main.go:301] handling current node
	I1210 22:29:13.791341       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:29:13.791370       1 main.go:301] handling current node
	I1210 22:29:23.799474       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:29:23.799506       1 main.go:301] handling current node
	I1210 22:29:33.794870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:29:33.794900       1 main.go:301] handling current node
	I1210 22:29:43.797685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:29:43.797716       1 main.go:301] handling current node
	I1210 22:29:53.792778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:29:53.792817       1 main.go:301] handling current node
	I1210 22:30:03.799003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:30:03.799047       1 main.go:301] handling current node
	I1210 22:30:13.794253       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:30:13.794288       1 main.go:301] handling current node
	I1210 22:30:23.799404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:30:23.799437       1 main.go:301] handling current node
	I1210 22:30:33.793811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:30:33.793846       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b] <==
	 > logger="UnhandledError"
	E1210 22:27:22.593073       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.192.9:443: connect: connection refused" logger="UnhandledError"
	E1210 22:27:22.598282       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.192.9:443: connect: connection refused" logger="UnhandledError"
	W1210 22:27:23.594725       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 22:27:23.594745       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 22:27:23.594783       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 22:27:23.594802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1210 22:27:23.594827       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 22:27:23.595959       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 22:27:25.321344       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1210 22:27:27.625191       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 22:27:27.625218       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1210 22:27:27.625238       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 22:28:03.325332       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53324: use of closed network connection
	E1210 22:28:03.468678       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53358: use of closed network connection
	I1210 22:28:09.227668       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 22:28:09.445077       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.35.22"}
	I1210 22:28:24.415475       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 22:30:34.162686       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.92.245"}
	
	
	==> kube-controller-manager [7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326] <==
	I1210 22:26:32.249487       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 22:26:32.249569       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 22:26:32.249774       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 22:26:32.249875       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 22:26:32.249890       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 22:26:32.249876       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 22:26:32.249960       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 22:26:32.250212       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 22:26:32.250230       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 22:26:32.250272       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 22:26:32.252514       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 22:26:32.255831       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 22:26:32.259220       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 22:26:32.264420       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 22:26:32.271889       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 22:26:32.275034       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E1210 22:26:34.894323       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 22:27:02.260595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 22:27:02.260726       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 22:27:02.260774       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 22:27:02.281625       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 22:27:02.284897       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 22:27:02.361821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 22:27:02.385199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 22:27:17.207788       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36] <==
	I1210 22:26:33.365088       1 server_linux.go:53] "Using iptables proxy"
	I1210 22:26:33.430279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 22:26:33.530452       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 22:26:33.530529       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 22:26:33.530604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:26:33.550193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 22:26:33.550249       1 server_linux.go:132] "Using iptables Proxier"
	I1210 22:26:33.556424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:26:33.563054       1 server.go:527] "Version info" version="v1.34.2"
	I1210 22:26:33.563389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:26:33.565348       1 config.go:200] "Starting service config controller"
	I1210 22:26:33.566205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:26:33.566131       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:26:33.566975       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:26:33.566143       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:26:33.567131       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:26:33.566923       1 config.go:309] "Starting node config controller"
	I1210 22:26:33.567222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:26:33.669126       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 22:26:33.669218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 22:26:33.669333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 22:26:33.672847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459] <==
	I1210 22:26:25.850165       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:26:25.851788       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 22:26:25.851812       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 22:26:25.852185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 22:26:25.852216       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 22:26:25.855231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 22:26:25.855253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:26:25.855303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:26:25.855430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:26:25.855434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:26:25.855866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 22:26:25.856103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:26:25.856114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 22:26:25.856285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 22:26:25.856401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 22:26:25.856523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:26:25.856623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:26:25.856683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 22:26:25.856703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 22:26:25.856881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 22:26:25.857096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 22:26:25.857099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 22:26:25.857171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 22:26:25.857172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1210 22:26:27.351924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 22:28:43 addons-713277 kubelet[1285]: I1210 22:28:43.960796    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=0.96077687 podStartE2EDuration="960.77687ms" podCreationTimestamp="2025-12-10 22:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 22:28:43.960717597 +0000 UTC m=+136.627420773" watchObservedRunningTime="2025-12-10 22:28:43.96077687 +0000 UTC m=+136.627480048"
	Dec 10 22:28:49 addons-713277 kubelet[1285]: I1210 22:28:49.412764    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xz7l5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.611486    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/508c2fd7-d5ee-4be4-8ea0-db1f0360438b-gcp-creds\") pod \"508c2fd7-d5ee-4be4-8ea0-db1f0360438b\" (UID: \"508c2fd7-d5ee-4be4-8ea0-db1f0360438b\") "
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.611612    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/508c2fd7-d5ee-4be4-8ea0-db1f0360438b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "508c2fd7-d5ee-4be4-8ea0-db1f0360438b" (UID: "508c2fd7-d5ee-4be4-8ea0-db1f0360438b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.611638    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9513092e-d617-11f0-9d83-3a75c33b0089\") pod \"508c2fd7-d5ee-4be4-8ea0-db1f0360438b\" (UID: \"508c2fd7-d5ee-4be4-8ea0-db1f0360438b\") "
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.611722    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sf7f\" (UniqueName: \"kubernetes.io/projected/508c2fd7-d5ee-4be4-8ea0-db1f0360438b-kube-api-access-4sf7f\") pod \"508c2fd7-d5ee-4be4-8ea0-db1f0360438b\" (UID: \"508c2fd7-d5ee-4be4-8ea0-db1f0360438b\") "
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.611867    1285 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/508c2fd7-d5ee-4be4-8ea0-db1f0360438b-gcp-creds\") on node \"addons-713277\" DevicePath \"\""
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.614251    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508c2fd7-d5ee-4be4-8ea0-db1f0360438b-kube-api-access-4sf7f" (OuterVolumeSpecName: "kube-api-access-4sf7f") pod "508c2fd7-d5ee-4be4-8ea0-db1f0360438b" (UID: "508c2fd7-d5ee-4be4-8ea0-db1f0360438b"). InnerVolumeSpecName "kube-api-access-4sf7f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.614675    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^9513092e-d617-11f0-9d83-3a75c33b0089" (OuterVolumeSpecName: "task-pv-storage") pod "508c2fd7-d5ee-4be4-8ea0-db1f0360438b" (UID: "508c2fd7-d5ee-4be4-8ea0-db1f0360438b"). InnerVolumeSpecName "pvc-005d811b-03c2-4c30-808a-b450672398d2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.712438    1285 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-005d811b-03c2-4c30-808a-b450672398d2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9513092e-d617-11f0-9d83-3a75c33b0089\") on node \"addons-713277\" "
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.712467    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4sf7f\" (UniqueName: \"kubernetes.io/projected/508c2fd7-d5ee-4be4-8ea0-db1f0360438b-kube-api-access-4sf7f\") on node \"addons-713277\" DevicePath \"\""
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.717283    1285 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-005d811b-03c2-4c30-808a-b450672398d2" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9513092e-d617-11f0-9d83-3a75c33b0089") on node "addons-713277"
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.813611    1285 reconciler_common.go:299] "Volume detached for volume \"pvc-005d811b-03c2-4c30-808a-b450672398d2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9513092e-d617-11f0-9d83-3a75c33b0089\") on node \"addons-713277\" DevicePath \"\""
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.978106    1285 scope.go:117] "RemoveContainer" containerID="29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802"
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.988026    1285 scope.go:117] "RemoveContainer" containerID="29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802"
	Dec 10 22:28:50 addons-713277 kubelet[1285]: E1210 22:28:50.988432    1285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802\": container with ID starting with 29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802 not found: ID does not exist" containerID="29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802"
	Dec 10 22:28:50 addons-713277 kubelet[1285]: I1210 22:28:50.988477    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802"} err="failed to get container status \"29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802\": rpc error: code = NotFound desc = could not find container \"29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802\": container with ID starting with 29c4224077419fa939ac3624a252ae2e6200dd531e2082dc2a735b4a40a14802 not found: ID does not exist"
	Dec 10 22:28:51 addons-713277 kubelet[1285]: I1210 22:28:51.415236    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508c2fd7-d5ee-4be4-8ea0-db1f0360438b" path="/var/lib/kubelet/pods/508c2fd7-d5ee-4be4-8ea0-db1f0360438b/volumes"
	Dec 10 22:28:54 addons-713277 kubelet[1285]: I1210 22:28:54.412077    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tlfbx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:29:49 addons-713277 kubelet[1285]: I1210 22:29:49.411903    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9zlkh" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:29:56 addons-713277 kubelet[1285]: I1210 22:29:56.411805    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tlfbx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:30:11 addons-713277 kubelet[1285]: I1210 22:30:11.411928    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xz7l5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:30:34 addons-713277 kubelet[1285]: I1210 22:30:34.184264    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a47506e0-aff6-4a7d-9215-b65f72a42c2a-gcp-creds\") pod \"hello-world-app-5d498dc89-zkdbf\" (UID: \"a47506e0-aff6-4a7d-9215-b65f72a42c2a\") " pod="default/hello-world-app-5d498dc89-zkdbf"
	Dec 10 22:30:34 addons-713277 kubelet[1285]: I1210 22:30:34.184372    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j45x\" (UniqueName: \"kubernetes.io/projected/a47506e0-aff6-4a7d-9215-b65f72a42c2a-kube-api-access-6j45x\") pod \"hello-world-app-5d498dc89-zkdbf\" (UID: \"a47506e0-aff6-4a7d-9215-b65f72a42c2a\") " pod="default/hello-world-app-5d498dc89-zkdbf"
	Dec 10 22:30:35 addons-713277 kubelet[1285]: I1210 22:30:35.359129    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-zkdbf" podStartSLOduration=0.897481346 podStartE2EDuration="1.359108004s" podCreationTimestamp="2025-12-10 22:30:34 +0000 UTC" firstStartedPulling="2025-12-10 22:30:34.421767205 +0000 UTC m=+247.088470360" lastFinishedPulling="2025-12-10 22:30:34.883393864 +0000 UTC m=+247.550097018" observedRunningTime="2025-12-10 22:30:35.358514685 +0000 UTC m=+248.025217861" watchObservedRunningTime="2025-12-10 22:30:35.359108004 +0000 UTC m=+248.025811180"
	
	
	==> storage-provisioner [1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508] <==
	W1210 22:30:11.460424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:13.463260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:13.467021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:15.470668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:15.475253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:17.479869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:17.485296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:19.488222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:19.492554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:21.495402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:21.499782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:23.502722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:23.506900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:25.510109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:25.513970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:27.517188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:27.522279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:29.525392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:29.529043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:31.532054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:31.537567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:33.540183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:33.544893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:35.548496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:30:35.553776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-713277 -n addons-713277
helpers_test.go:270: (dbg) Run:  kubectl --context addons-713277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-713277 describe pod ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-713277 describe pod ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s: exit status 1 (56.72167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8t8lk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5hp7s" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-713277 describe pod ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (243.822415ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:30:36.697989   24697 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:30:36.698289   24697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:30:36.698298   24697 out.go:374] Setting ErrFile to fd 2...
	I1210 22:30:36.698303   24697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:30:36.698511   24697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:30:36.698795   24697 mustload.go:66] Loading cluster: addons-713277
	I1210 22:30:36.699119   24697 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:30:36.699132   24697 addons.go:622] checking whether the cluster is paused
	I1210 22:30:36.699207   24697 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:30:36.699218   24697 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:30:36.699690   24697 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:30:36.717785   24697 ssh_runner.go:195] Run: systemctl --version
	I1210 22:30:36.717856   24697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:30:36.735772   24697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:30:36.831632   24697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:30:36.831722   24697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:30:36.861065   24697 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:30:36.861088   24697 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:30:36.861094   24697 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:30:36.861099   24697 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:30:36.861104   24697 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:30:36.861109   24697 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:30:36.861114   24697 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:30:36.861128   24697 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:30:36.861131   24697 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:30:36.861136   24697 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:30:36.861139   24697 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:30:36.861142   24697 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:30:36.861145   24697 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:30:36.861148   24697 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:30:36.861151   24697 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:30:36.861167   24697 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:30:36.861175   24697 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:30:36.861178   24697 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:30:36.861181   24697 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:30:36.861184   24697 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:30:36.861187   24697 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:30:36.861189   24697 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:30:36.861192   24697 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:30:36.861194   24697 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:30:36.861197   24697 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:30:36.861199   24697 cri.go:89] found id: ""
	I1210 22:30:36.861236   24697 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:30:36.876098   24697 out.go:203] 
	W1210 22:30:36.877473   24697 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:30:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:30:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:30:36.877493   24697 out.go:285] * 
	* 
	W1210 22:30:36.880431   24697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:30:36.881723   24697 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable ingress --alsologtostderr -v=1: exit status 11 (243.483195ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:30:36.943747   24756 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:30:36.944013   24756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:30:36.944021   24756 out.go:374] Setting ErrFile to fd 2...
	I1210 22:30:36.944026   24756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:30:36.944242   24756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:30:36.944498   24756 mustload.go:66] Loading cluster: addons-713277
	I1210 22:30:36.944823   24756 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:30:36.944838   24756 addons.go:622] checking whether the cluster is paused
	I1210 22:30:36.944919   24756 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:30:36.944930   24756 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:30:36.945306   24756 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:30:36.963486   24756 ssh_runner.go:195] Run: systemctl --version
	I1210 22:30:36.963563   24756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:30:36.982216   24756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:30:37.077139   24756 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:30:37.077203   24756 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:30:37.106064   24756 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:30:37.106099   24756 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:30:37.106104   24756 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:30:37.106110   24756 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:30:37.106114   24756 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:30:37.106119   24756 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:30:37.106124   24756 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:30:37.106129   24756 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:30:37.106133   24756 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:30:37.106145   24756 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:30:37.106154   24756 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:30:37.106158   24756 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:30:37.106162   24756 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:30:37.106172   24756 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:30:37.106176   24756 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:30:37.106187   24756 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:30:37.106194   24756 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:30:37.106201   24756 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:30:37.106205   24756 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:30:37.106210   24756 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:30:37.106214   24756 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:30:37.106221   24756 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:30:37.106226   24756 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:30:37.106234   24756 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:30:37.106237   24756 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:30:37.106240   24756 cri.go:89] found id: ""
	I1210 22:30:37.106284   24756 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:30:37.120381   24756 out.go:203] 
	W1210 22:30:37.121965   24756 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:30:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:30:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:30:37.121982   24756 out.go:285] * 
	* 
	W1210 22:30:37.124880   24756 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:30:37.126253   24756 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.16s)
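Note: every `addons disable` call in this report fails the same way: minikube's "is the cluster paused" pre-flight runs `sudo runc list -f json` on the node, which exits 1 because `/run/runc` does not exist on this crio node. A minimal sketch of re-running the same checks by hand follows; the first two commands mirror the captured log above, while the `/run/runc` and crio.conf checks are assumptions about how one might confirm that crio is configured with an OCI runtime other than runc (the paths are stock crio defaults and may differ in the minikube image).

    # List kube-system containers the same way the addon code does (mirrors the crictl call in the log above).
    out/minikube-linux-amd64 -p addons-713277 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

    # Re-run the failing pause check; on this node it exits 1 with "open /run/runc: no such file or directory".
    out/minikube-linux-amd64 -p addons-713277 ssh -- "sudo runc list -f json"

    # Assumed follow-up: check whether /run/runc exists and which runtime crio is configured to use.
    out/minikube-linux-amd64 -p addons-713277 ssh -- "ls /run/runc; grep -R default_runtime /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null"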

                                                
                                    
+ TestAddons/parallel/InspektorGadget (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-9zvtj" [08502007-54ab-4879-b85d-eebbbb651a18] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00398994s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (237.329275ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:11.265345   20205 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:11.265653   20205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:11.265664   20205 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:11.265670   20205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:11.265862   20205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:11.266142   20205 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:11.266484   20205 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:11.266503   20205 addons.go:622] checking whether the cluster is paused
	I1210 22:28:11.266606   20205 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:11.266621   20205 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:11.267050   20205 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:11.284515   20205 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:11.284564   20205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:11.302327   20205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:11.397168   20205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:11.397236   20205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:11.425035   20205 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:11.425062   20205 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:11.425066   20205 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:11.425069   20205 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:11.425072   20205 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:11.425076   20205 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:11.425078   20205 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:11.425081   20205 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:11.425083   20205 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:11.425092   20205 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:11.425094   20205 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:11.425097   20205 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:11.425100   20205 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:11.425104   20205 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:11.425108   20205 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:11.425121   20205 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:11.425128   20205 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:11.425133   20205 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:11.425135   20205 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:11.425138   20205 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:11.425141   20205 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:11.425143   20205 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:11.425146   20205 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:11.425148   20205 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:11.425151   20205 cri.go:89] found id: ""
	I1210 22:28:11.425190   20205 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:11.440610   20205 out.go:203] 
	W1210 22:28:11.441871   20205 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:11.441890   20205 out.go:285] * 
	* 
	W1210 22:28:11.444827   20205 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:11.446208   20205 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)
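Note: since the disable command exits during the pre-flight pause check, the addon itself is presumably never removed. A quick hedged follow-up (the `gadget` namespace is taken from the wait step above; treating the workload as a DaemonSet is an assumption) would be:

    # Confirm the inspektor-gadget resources are still present after the failed disable.
    kubectl --context addons-713277 get pods,daemonsets -n gadget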

                                                
                                    
+ TestAddons/parallel/MetricsServer (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.820022ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002223926s
addons_test.go:465: (dbg) Run:  kubectl --context addons-713277 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (258.292554ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:08.836244   19483 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:08.836365   19483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:08.836374   19483 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:08.836378   19483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:08.836582   19483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:08.836874   19483 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:08.837204   19483 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:08.837218   19483 addons.go:622] checking whether the cluster is paused
	I1210 22:28:08.837302   19483 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:08.837313   19483 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:08.837734   19483 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:08.856475   19483 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:08.856523   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:08.874206   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:08.969622   19483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:08.969726   19483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:09.001403   19483 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:09.001438   19483 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:09.001444   19483 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:09.001448   19483 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:09.001452   19483 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:09.001458   19483 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:09.001461   19483 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:09.001466   19483 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:09.001470   19483 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:09.001478   19483 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:09.001483   19483 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:09.001488   19483 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:09.001500   19483 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:09.001506   19483 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:09.001515   19483 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:09.001525   19483 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:09.001532   19483 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:09.001538   19483 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:09.001543   19483 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:09.001546   19483 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:09.001565   19483 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:09.001570   19483 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:09.001574   19483 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:09.001591   19483 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:09.001595   19483 cri.go:89] found id: ""
	I1210 22:28:09.001665   19483 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:09.017188   19483 out.go:203] 
	W1210 22:28:09.018442   19483 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:09.018469   19483 out.go:285] * 
	* 
	W1210 22:28:09.023062   19483 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:09.028803   19483 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)
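Note on the failure mode: every MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED exit in this report follows the same sequence visible in the stderr above: the addon command lists kube-system containers with crictl, then runs `sudo runc list -f json` to check for paused containers, and that command exits with status 1 because /run/runc does not exist on this crio node. A minimal Go sketch of that style of check is shown below; it is not minikube's implementation, and the JSON fields, function names, and error handling are assumptions based only on the log lines above.

	// Sketch only: reproduce the paused-container check that fails in the logs above.
	// It shells out to `sudo runc list -f json` and decodes the result; the struct
	// fields and the choice of runc (rather than another runtime) are assumptions.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer keeps only the fields this sketch needs from `runc list -f json`.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the branch the report hits: /run/runc is missing, runc exits
			// non-zero, and the addon command aborts with MK_ADDON_DISABLE_PAUSED.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}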

                                                
                                    
x
+
TestAddons/parallel/CSI (42.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1210 22:28:09.038396    8660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 22:28:09.041387    8660 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 22:28:09.041409    8660 kapi.go:107] duration metric: took 3.029329ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.036966ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-713277 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-713277 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [9452d846-8795-4d8e-9825-204236247ed3] Pending
helpers_test.go:353: "task-pv-pod" [9452d846-8795-4d8e-9825-204236247ed3] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003357351s
addons_test.go:574: (dbg) Run:  kubectl --context addons-713277 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-713277 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-713277 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-713277 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-713277 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-713277 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-713277 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [508c2fd7-d5ee-4be4-8ea0-db1f0360438b] Pending
helpers_test.go:353: "task-pv-pod-restore" [508c2fd7-d5ee-4be4-8ea0-db1f0360438b] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.002882321s
addons_test.go:616: (dbg) Run:  kubectl --context addons-713277 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-713277 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-713277 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (243.357411ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:51.380389   22584 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:51.380544   22584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:51.380555   22584 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:51.380558   22584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:51.380762   22584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:51.381041   22584 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:51.381407   22584 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:51.381421   22584 addons.go:622] checking whether the cluster is paused
	I1210 22:28:51.381504   22584 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:51.381518   22584 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:51.381934   22584 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:51.400621   22584 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:51.400704   22584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:51.418330   22584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:51.513599   22584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:51.513713   22584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:51.544028   22584 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:28:51.544057   22584 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:51.544061   22584 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:51.544064   22584 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:51.544067   22584 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:51.544071   22584 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:51.544076   22584 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:51.544078   22584 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:51.544081   22584 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:51.544086   22584 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:51.544091   22584 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:51.544095   22584 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:51.544106   22584 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:51.544109   22584 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:51.544112   22584 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:51.544119   22584 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:51.544122   22584 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:51.544126   22584 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:51.544129   22584 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:51.544131   22584 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:51.544137   22584 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:51.544139   22584 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:51.544142   22584 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:51.544145   22584 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:51.544148   22584 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:51.544151   22584 cri.go:89] found id: ""
	I1210 22:28:51.544189   22584 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:51.559043   22584 out.go:203] 
	W1210 22:28:51.560551   22584 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:51.560572   22584 out.go:285] * 
	* 
	W1210 22:28:51.563553   22584 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:51.564977   22584 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (244.94575ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:51.625576   22645 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:51.625722   22645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:51.625731   22645 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:51.625735   22645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:51.625909   22645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:51.626191   22645 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:51.626538   22645 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:51.626552   22645 addons.go:622] checking whether the cluster is paused
	I1210 22:28:51.626636   22645 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:51.626665   22645 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:51.627130   22645 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:51.644988   22645 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:51.645039   22645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:51.662908   22645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:51.759442   22645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:51.759542   22645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:51.790474   22645 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:28:51.790501   22645 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:51.790508   22645 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:51.790513   22645 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:51.790517   22645 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:51.790521   22645 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:51.790525   22645 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:51.790529   22645 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:51.790534   22645 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:51.790543   22645 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:51.790547   22645 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:51.790551   22645 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:51.790556   22645 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:51.790566   22645 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:51.790572   22645 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:51.790588   22645 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:51.790595   22645 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:51.790600   22645 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:51.790603   22645 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:51.790606   22645 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:51.790611   22645 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:51.790613   22645 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:51.790616   22645 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:51.790619   22645 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:51.790622   22645 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:51.790624   22645 cri.go:89] found id: ""
	I1210 22:28:51.790697   22645 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:51.804797   22645 out.go:203] 
	W1210 22:28:51.806085   22645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:51.806102   22645 out.go:285] * 
	* 
	W1210 22:28:51.808997   22645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:51.810385   22645 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (42.78s)
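The repeated kubectl invocations above (for pvc hpvc, pvc hpvc-restore, and volumesnapshot new-snapshot-demo) are the test's polling loop: it re-reads a JSONPath field until it reports the expected value or the 6m0s window expires. A rough Go sketch of the same pattern follows; the context, namespace, and PVC name come from the log, while the poll interval and the standalone program structure are illustrative assumptions.

	// Sketch only: poll a PVC's .status.phase via kubectl until it is Bound,
	// mirroring the helpers_test.go polling visible in the trace above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForPVCBound(kubeContext, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-713277", "default", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}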

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-713277 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-713277 --alsologtostderr -v=1: exit status 11 (240.917185ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:03.767947   18661 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:03.768381   18661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:03.768392   18661 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:03.768398   18661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:03.768673   18661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:03.768963   18661 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:03.769342   18661 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:03.769361   18661 addons.go:622] checking whether the cluster is paused
	I1210 22:28:03.769472   18661 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:03.769490   18661 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:03.770130   18661 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:03.787970   18661 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:03.788024   18661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:03.805426   18661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:03.899375   18661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:03.899453   18661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:03.927895   18661 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:03.927913   18661 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:03.927916   18661 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:03.927920   18661 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:03.927923   18661 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:03.927926   18661 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:03.927929   18661 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:03.927931   18661 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:03.927934   18661 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:03.927939   18661 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:03.927942   18661 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:03.927944   18661 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:03.927949   18661 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:03.927952   18661 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:03.927955   18661 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:03.927961   18661 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:03.927965   18661 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:03.927969   18661 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:03.927972   18661 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:03.927975   18661 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:03.927978   18661 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:03.927980   18661 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:03.927983   18661 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:03.927985   18661 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:03.927988   18661 cri.go:89] found id: ""
	I1210 22:28:03.928026   18661 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:03.941927   18661 out.go:203] 
	W1210 22:28:03.943332   18661 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:03.943357   18661 out.go:285] * 
	* 
	W1210 22:28:03.946249   18661 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:03.947665   18661 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-713277 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-713277
helpers_test.go:244: (dbg) docker inspect addons-713277:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500",
	        "Created": "2025-12-10T22:26:15.572898264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T22:26:15.616590435Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/hostname",
	        "HostsPath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/hosts",
	        "LogPath": "/var/lib/docker/containers/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500/df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500-json.log",
	        "Name": "/addons-713277",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-713277:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-713277",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df9731acd91bd26f779bb7a368672797b8d1637d7a69b0be7df52f6c6203d500",
	                "LowerDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d96ccc5d623c916cace1f8eda690149b6710e9dab000a42f1ca46fb31a82e6ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-713277",
	                "Source": "/var/lib/docker/volumes/addons-713277/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-713277",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-713277",
	                "name.minikube.sigs.k8s.io": "addons-713277",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e51c06d9b3b98d4fb9a4f1fd695018face312d4f7b89056e8352b7cf2797c772",
	            "SandboxKey": "/var/run/docker/netns/e51c06d9b3b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-713277": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68f994aacdfe48dfffec610a926ba1df2096191c6ae50bc5d7210533d5089584",
	                    "EndpointID": "2aa2a0bf0d9f61ec121d5d3a005cc507ea9f3d50dda319dd7df4184695365669",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "de:ac:fa:1e:f2:6c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-713277",
	                        "df9731acd91b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
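For reference, the SSH endpoint used throughout the stderr logs (new ssh client: 127.0.0.1:32768) comes from the Ports section of the docker inspect output above; the logs show cli_runner reading it with a Go template over NetworkSettings.Ports. A small Go sketch of the same lookup is below, with the container name taken from this report and everything else illustrative.

	// Sketch only: read the published host port for 22/tcp from `docker inspect`,
	// using the same Go template that appears in the cli_runner log lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-713277")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh port:", port)
	}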
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-713277 -n addons-713277
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-713277 logs -n 25: (1.120708683s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-751103 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-751103   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-751103                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-751103   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ -o=json --download-only -p download-only-488286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-488286   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-488286                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-488286   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ -o=json --download-only -p download-only-033871 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-033871   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-033871                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-033871   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-751103                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-751103   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-488286                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-488286   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-033871                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-033871   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ --download-only -p download-docker-950186 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-950186 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ -p download-docker-950186                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-950186 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ --download-only -p binary-mirror-479778 --alsologtostderr --binary-mirror http://127.0.0.1:46291 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-479778   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ -p binary-mirror-479778                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-479778   │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ addons  │ enable dashboard -p addons-713277                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-713277          │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-713277                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-713277          │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ start   │ -p addons-713277 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-713277          │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:27 UTC │
	│ addons  │ addons-713277 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-713277          │ jenkins │ v1.37.0 │ 10 Dec 25 22:27 UTC │                     │
	│ addons  │ addons-713277 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-713277          │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-713277 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-713277          │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:25:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:25:52.425347   10419 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:25:52.425618   10419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:52.425629   10419 out.go:374] Setting ErrFile to fd 2...
	I1210 22:25:52.425636   10419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:52.425870   10419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:25:52.426424   10419 out.go:368] Setting JSON to false
	I1210 22:25:52.427255   10419 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":494,"bootTime":1765405058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:25:52.427310   10419 start.go:143] virtualization: kvm guest
	I1210 22:25:52.429346   10419 out.go:179] * [addons-713277] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:25:52.430993   10419 notify.go:221] Checking for updates...
	I1210 22:25:52.431047   10419 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:25:52.432570   10419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:25:52.434025   10419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:25:52.435560   10419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:25:52.436856   10419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:25:52.438124   10419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:25:52.439601   10419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:25:52.464067   10419 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:25:52.464229   10419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:52.517830   10419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 22:25:52.508562141 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:52.517927   10419 docker.go:319] overlay module found
	I1210 22:25:52.519955   10419 out.go:179] * Using the docker driver based on user configuration
	I1210 22:25:52.521196   10419 start.go:309] selected driver: docker
	I1210 22:25:52.521213   10419 start.go:927] validating driver "docker" against <nil>
	I1210 22:25:52.521232   10419 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:25:52.521948   10419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:52.575958   10419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 22:25:52.566549943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:52.576142   10419 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:25:52.576384   10419 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 22:25:52.578116   10419 out.go:179] * Using Docker driver with root privileges
	I1210 22:25:52.579398   10419 cni.go:84] Creating CNI manager for ""
	I1210 22:25:52.579459   10419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 22:25:52.579469   10419 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 22:25:52.579531   10419 start.go:353] cluster config:
	{Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:25:52.581231   10419 out.go:179] * Starting "addons-713277" primary control-plane node in "addons-713277" cluster
	I1210 22:25:52.582561   10419 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 22:25:52.583787   10419 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 22:25:52.584973   10419 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:25:52.585011   10419 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 22:25:52.585021   10419 cache.go:65] Caching tarball of preloaded images
	I1210 22:25:52.585091   10419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 22:25:52.585113   10419 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 22:25:52.585125   10419 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 22:25:52.585600   10419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/config.json ...
	I1210 22:25:52.585626   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/config.json: {Name:mk8319a125c2c8127427cf1b33cd61c4fd701213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:25:52.601544   10419 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 22:25:52.601686   10419 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 22:25:52.601712   10419 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 22:25:52.601719   10419 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 22:25:52.601730   10419 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 22:25:52.601740   10419 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1210 22:26:05.198371   10419 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 22:26:05.198406   10419 cache.go:243] Successfully downloaded all kic artifacts
	I1210 22:26:05.198444   10419 start.go:360] acquireMachinesLock for addons-713277: {Name:mkedaedeb4d270ce44212898da8a4cf27fda7401 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 22:26:05.198537   10419 start.go:364] duration metric: took 76.687µs to acquireMachinesLock for "addons-713277"
	I1210 22:26:05.198560   10419 start.go:93] Provisioning new machine with config: &{Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 22:26:05.198634   10419 start.go:125] createHost starting for "" (driver="docker")
	I1210 22:26:05.200380   10419 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 22:26:05.200579   10419 start.go:159] libmachine.API.Create for "addons-713277" (driver="docker")
	I1210 22:26:05.200608   10419 client.go:173] LocalClient.Create starting
	I1210 22:26:05.200720   10419 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 22:26:05.229553   10419 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 22:26:05.383877   10419 cli_runner.go:164] Run: docker network inspect addons-713277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 22:26:05.401516   10419 cli_runner.go:211] docker network inspect addons-713277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 22:26:05.401592   10419 network_create.go:284] running [docker network inspect addons-713277] to gather additional debugging logs...
	I1210 22:26:05.401616   10419 cli_runner.go:164] Run: docker network inspect addons-713277
	W1210 22:26:05.417893   10419 cli_runner.go:211] docker network inspect addons-713277 returned with exit code 1
	I1210 22:26:05.417923   10419 network_create.go:287] error running [docker network inspect addons-713277]: docker network inspect addons-713277: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-713277 not found
	I1210 22:26:05.417937   10419 network_create.go:289] output of [docker network inspect addons-713277]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-713277 not found
	
	** /stderr **
	I1210 22:26:05.418040   10419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 22:26:05.435445   10419 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10ef0}
	I1210 22:26:05.435492   10419 network_create.go:124] attempt to create docker network addons-713277 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 22:26:05.435532   10419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-713277 addons-713277
	I1210 22:26:05.482520   10419 network_create.go:108] docker network addons-713277 192.168.49.0/24 created
	I1210 22:26:05.482546   10419 kic.go:121] calculated static IP "192.168.49.2" for the "addons-713277" container
	I1210 22:26:05.482607   10419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 22:26:05.500030   10419 cli_runner.go:164] Run: docker volume create addons-713277 --label name.minikube.sigs.k8s.io=addons-713277 --label created_by.minikube.sigs.k8s.io=true
	I1210 22:26:05.517311   10419 oci.go:103] Successfully created a docker volume addons-713277
	I1210 22:26:05.517378   10419 cli_runner.go:164] Run: docker run --rm --name addons-713277-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-713277 --entrypoint /usr/bin/test -v addons-713277:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 22:26:11.714265   10419 cli_runner.go:217] Completed: docker run --rm --name addons-713277-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-713277 --entrypoint /usr/bin/test -v addons-713277:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (6.19685051s)
	I1210 22:26:11.714303   10419 oci.go:107] Successfully prepared a docker volume addons-713277
	I1210 22:26:11.714361   10419 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:11.714375   10419 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 22:26:11.714425   10419 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-713277:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 22:26:15.504108   10419 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-713277:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.789639473s)
	I1210 22:26:15.504136   10419 kic.go:203] duration metric: took 3.789756995s to extract preloaded images to volume ...
	W1210 22:26:15.504233   10419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 22:26:15.504282   10419 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 22:26:15.504322   10419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 22:26:15.556938   10419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-713277 --name addons-713277 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-713277 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-713277 --network addons-713277 --ip 192.168.49.2 --volume addons-713277:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 22:26:15.864814   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Running}}
	I1210 22:26:15.883763   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:15.904331   10419 cli_runner.go:164] Run: docker exec addons-713277 stat /var/lib/dpkg/alternatives/iptables
	I1210 22:26:15.949808   10419 oci.go:144] the created container "addons-713277" has a running status.
	I1210 22:26:15.949839   10419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa...
	I1210 22:26:16.052971   10419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 22:26:16.079611   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:16.101589   10419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 22:26:16.101630   10419 kic_runner.go:114] Args: [docker exec --privileged addons-713277 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 22:26:16.144593   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:16.170501   10419 machine.go:94] provisionDockerMachine start ...
	I1210 22:26:16.170610   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.192598   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.192842   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.192861   10419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 22:26:16.333499   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-713277
	
	I1210 22:26:16.333526   10419 ubuntu.go:182] provisioning hostname "addons-713277"
	I1210 22:26:16.333588   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.352281   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.352567   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.352588   10419 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-713277 && echo "addons-713277" | sudo tee /etc/hostname
	I1210 22:26:16.496639   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-713277
	
	I1210 22:26:16.496736   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.516022   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.516236   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.516252   10419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-713277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-713277/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-713277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 22:26:16.648348   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 22:26:16.648373   10419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 22:26:16.648393   10419 ubuntu.go:190] setting up certificates
	I1210 22:26:16.648406   10419 provision.go:84] configureAuth start
	I1210 22:26:16.648459   10419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-713277
	I1210 22:26:16.665859   10419 provision.go:143] copyHostCerts
	I1210 22:26:16.665928   10419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 22:26:16.666100   10419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 22:26:16.666178   10419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 22:26:16.666232   10419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.addons-713277 san=[127.0.0.1 192.168.49.2 addons-713277 localhost minikube]
	I1210 22:26:16.710469   10419 provision.go:177] copyRemoteCerts
	I1210 22:26:16.710544   10419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 22:26:16.710581   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.727832   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:16.824146   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 22:26:16.843597   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 22:26:16.860761   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 22:26:16.877618   10419 provision.go:87] duration metric: took 229.197414ms to configureAuth
	I1210 22:26:16.877665   10419 ubuntu.go:206] setting minikube options for container-runtime
	I1210 22:26:16.877824   10419 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:16.877919   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:16.895383   10419 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:16.895628   10419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 22:26:16.895667   10419 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 22:26:17.167347   10419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 22:26:17.167370   10419 machine.go:97] duration metric: took 996.846411ms to provisionDockerMachine
	I1210 22:26:17.167380   10419 client.go:176] duration metric: took 11.966764439s to LocalClient.Create
	I1210 22:26:17.167393   10419 start.go:167] duration metric: took 11.966813404s to libmachine.API.Create "addons-713277"
	I1210 22:26:17.167402   10419 start.go:293] postStartSetup for "addons-713277" (driver="docker")
	I1210 22:26:17.167414   10419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 22:26:17.167468   10419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 22:26:17.167500   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.185050   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.282726   10419 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 22:26:17.286271   10419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 22:26:17.286303   10419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 22:26:17.286313   10419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 22:26:17.286378   10419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 22:26:17.286401   10419 start.go:296] duration metric: took 118.993729ms for postStartSetup
	I1210 22:26:17.286686   10419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-713277
	I1210 22:26:17.303579   10419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/config.json ...
	I1210 22:26:17.303876   10419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:26:17.303918   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.322057   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.414665   10419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 22:26:17.419214   10419 start.go:128] duration metric: took 12.220566574s to createHost
	I1210 22:26:17.419239   10419 start.go:83] releasing machines lock for "addons-713277", held for 12.220689836s
	I1210 22:26:17.419306   10419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-713277
	I1210 22:26:17.436962   10419 ssh_runner.go:195] Run: cat /version.json
	I1210 22:26:17.437011   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.437044   10419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 22:26:17.437148   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:17.455922   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.456535   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:17.600629   10419 ssh_runner.go:195] Run: systemctl --version
	I1210 22:26:17.606989   10419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 22:26:17.640746   10419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 22:26:17.645596   10419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 22:26:17.645671   10419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 22:26:17.671387   10419 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 22:26:17.671415   10419 start.go:496] detecting cgroup driver to use...
	I1210 22:26:17.671452   10419 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 22:26:17.671494   10419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 22:26:17.687165   10419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 22:26:17.698836   10419 docker.go:218] disabling cri-docker service (if available) ...
	I1210 22:26:17.698887   10419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 22:26:17.714851   10419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 22:26:17.732316   10419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 22:26:17.813688   10419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 22:26:17.902576   10419 docker.go:234] disabling docker service ...
	I1210 22:26:17.902633   10419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 22:26:17.920838   10419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 22:26:17.932986   10419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 22:26:18.010146   10419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 22:26:18.087243   10419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 22:26:18.099198   10419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 22:26:18.112743   10419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 22:26:18.112791   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.123242   10419 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 22:26:18.123300   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.132264   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.140869   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.149311   10419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 22:26:18.156986   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.164968   10419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.177899   10419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:18.186423   10419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 22:26:18.193425   10419 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 22:26:18.193468   10419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 22:26:18.205335   10419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 22:26:18.212718   10419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:18.289834   10419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 22:26:18.412720   10419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 22:26:18.412801   10419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 22:26:18.416727   10419 start.go:564] Will wait 60s for crictl version
	I1210 22:26:18.416784   10419 ssh_runner.go:195] Run: which crictl
	I1210 22:26:18.420425   10419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 22:26:18.445488   10419 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 22:26:18.445601   10419 ssh_runner.go:195] Run: crio --version
	I1210 22:26:18.472482   10419 ssh_runner.go:195] Run: crio --version
	I1210 22:26:18.501975   10419 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 22:26:18.503362   10419 cli_runner.go:164] Run: docker network inspect addons-713277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 22:26:18.520622   10419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 22:26:18.524800   10419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 22:26:18.534947   10419 kubeadm.go:884] updating cluster {Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 22:26:18.535053   10419 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:18.535099   10419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 22:26:18.567183   10419 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 22:26:18.567203   10419 crio.go:433] Images already preloaded, skipping extraction
	I1210 22:26:18.567244   10419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 22:26:18.591688   10419 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 22:26:18.591709   10419 cache_images.go:86] Images are preloaded, skipping loading
	I1210 22:26:18.591716   10419 kubeadm.go:935] updating node { 192.168.49.2  8443 v1.34.2 crio true true} ...
	I1210 22:26:18.591809   10419 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-713277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 22:26:18.591871   10419 ssh_runner.go:195] Run: crio config
	I1210 22:26:18.634659   10419 cni.go:84] Creating CNI manager for ""
	I1210 22:26:18.634686   10419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 22:26:18.634707   10419 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 22:26:18.634740   10419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-713277 NodeName:addons-713277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 22:26:18.634865   10419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-713277"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 22:26:18.634944   10419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 22:26:18.642986   10419 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 22:26:18.643044   10419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 22:26:18.650936   10419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 22:26:18.662949   10419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 22:26:18.677549   10419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
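The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before kubeadm consumes it. If that file ever needed to be exercised by hand, kubeadm's standard --dry-run mode could be pointed at it; a minimal sketch using the same binary path and file as this log, not a step the test run performs:

	# Illustrative only: run the generated config through kubeadm without modifying the host.
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run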
	I1210 22:26:18.689823   10419 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 22:26:18.693330   10419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 22:26:18.703193   10419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:18.782490   10419 ssh_runner.go:195] Run: sudo systemctl start kubelet
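If the kubelet start above needed manual verification, its local health endpoint is the same one kubeadm polls later in this log (http://127.0.0.1:10248/healthz); a sketch, assuming curl is available inside the kicbase container:

	# Sketch only: confirm the kubelet unit is active and answering health checks.
	systemctl is-active kubelet && curl -sf http://127.0.0.1:10248/healthz && echo healthy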
	I1210 22:26:18.808038   10419 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277 for IP: 192.168.49.2
	I1210 22:26:18.808061   10419 certs.go:195] generating shared ca certs ...
	I1210 22:26:18.808080   10419 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.808241   10419 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 22:26:18.929357   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt ...
	I1210 22:26:18.929388   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt: {Name:mkbd30d3b4f4ba5b83e216c0671eb91421516806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.929584   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key ...
	I1210 22:26:18.929596   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key: {Name:mk81df2d47aaadeaa0810edca18da86636f14941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.929720   10419 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 22:26:18.960945   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt ...
	I1210 22:26:18.960981   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt: {Name:mk2f7a78b774462d65488c066902bd3b0099fa43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.961122   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key ...
	I1210 22:26:18.961138   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key: {Name:mkdc055b7f3370167cae79e8d6f08805a0012de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:18.961202   10419 certs.go:257] generating profile certs ...
	I1210 22:26:18.961253   10419 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.key
	I1210 22:26:18.961266   10419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt with IP's: []
	I1210 22:26:19.103303   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt ...
	I1210 22:26:19.103330   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: {Name:mk733fd208879e3efff97dfc66c558c69ea74288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.103492   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.key ...
	I1210 22:26:19.103503   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.key: {Name:mk76c5b17ff6c8180ef7f8e7d7d0b263573cf628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.103562   10419 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187
	I1210 22:26:19.103580   10419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 22:26:19.156810   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187 ...
	I1210 22:26:19.156838   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187: {Name:mk7e413d79e4f129c2878b25cf1750e72f209dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.156984   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187 ...
	I1210 22:26:19.156997   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187: {Name:mkcce4a4fcc035feffe0502e113eaaf30a4baa10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.157063   10419 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt.e98e3187 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt
	I1210 22:26:19.157147   10419 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key.e98e3187 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key
	I1210 22:26:19.157197   10419 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key
	I1210 22:26:19.157216   10419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt with IP's: []
	I1210 22:26:19.322243   10419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt ...
	I1210 22:26:19.322274   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt: {Name:mk31c1cddd242b12293e2e5d6f788ae2f5bfa861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.322437   10419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key ...
	I1210 22:26:19.322448   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key: {Name:mk1991af609f07d13443419da545d786acfe061b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:19.322678   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 22:26:19.322718   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 22:26:19.322746   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 22:26:19.322774   10419 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 22:26:19.323320   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 22:26:19.341374   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 22:26:19.358896   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 22:26:19.376131   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 22:26:19.393105   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 22:26:19.409785   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 22:26:19.427202   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 22:26:19.444354   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 22:26:19.461701   10419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 22:26:19.480844   10419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 22:26:19.493605   10419 ssh_runner.go:195] Run: openssl version
	I1210 22:26:19.499867   10419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.507418   10419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 22:26:19.517335   10419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.521031   10419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.521077   10419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:19.554819   10419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 22:26:19.562368   10419 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
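The two commands above implement OpenSSL's subject-hash lookup convention: the symlink name (here b5213941.0) is the CA certificate's subject hash plus a ".0" suffix, which lets TLS clients scanning /etc/ssl/certs find the minikube CA. A sketch of how that name is derived, reusing the exact openssl invocation from the log:

	# The hash printed here is what becomes the <hash>.0 symlink name.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"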
	I1210 22:26:19.569855   10419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 22:26:19.573384   10419 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 22:26:19.573453   10419 kubeadm.go:401] StartCluster: {Name:addons-713277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-713277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:26:19.573522   10419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:26:19.573572   10419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:26:19.600554   10419 cri.go:89] found id: ""
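The empty result above simply means no kube-system containers exist yet on this fresh node. The same CRI query can be repeated by hand in a readable form; a sketch using only flags already shown in the log, with --quiet dropped so names and states are printed:

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system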
	I1210 22:26:19.600622   10419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 22:26:19.608784   10419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 22:26:19.616864   10419 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 22:26:19.616925   10419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 22:26:19.624342   10419 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 22:26:19.624357   10419 kubeadm.go:158] found existing configuration files:
	
	I1210 22:26:19.624396   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 22:26:19.631550   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 22:26:19.631603   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 22:26:19.638586   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 22:26:19.645994   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 22:26:19.646041   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 22:26:19.653332   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 22:26:19.661148   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 22:26:19.661202   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 22:26:19.668325   10419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 22:26:19.675848   10419 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 22:26:19.675901   10419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 22:26:19.683531   10419 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 22:26:19.721889   10419 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 22:26:19.721954   10419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 22:26:19.741420   10419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 22:26:19.741482   10419 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 22:26:19.741521   10419 kubeadm.go:319] OS: Linux
	I1210 22:26:19.741578   10419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 22:26:19.741668   10419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 22:26:19.741742   10419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 22:26:19.741840   10419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 22:26:19.741930   10419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 22:26:19.742019   10419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 22:26:19.742109   10419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 22:26:19.742173   10419 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 22:26:19.797102   10419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 22:26:19.797238   10419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 22:26:19.797385   10419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 22:26:19.804102   10419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 22:26:19.807767   10419 out.go:252]   - Generating certificates and keys ...
	I1210 22:26:19.807878   10419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 22:26:19.807974   10419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 22:26:20.052789   10419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 22:26:20.454343   10419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 22:26:20.710808   10419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 22:26:20.790906   10419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 22:26:21.026521   10419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 22:26:21.026637   10419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-713277 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 22:26:21.120054   10419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 22:26:21.120200   10419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-713277 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 22:26:21.358275   10419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 22:26:21.513225   10419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 22:26:21.668168   10419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 22:26:21.668244   10419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 22:26:21.901278   10419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 22:26:22.093038   10419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 22:26:22.113948   10419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 22:26:22.327098   10419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 22:26:22.641040   10419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 22:26:22.641512   10419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 22:26:22.645178   10419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 22:26:22.646753   10419 out.go:252]   - Booting up control plane ...
	I1210 22:26:22.646867   10419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 22:26:22.646962   10419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 22:26:22.647474   10419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 22:26:22.673197   10419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 22:26:22.673365   10419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 22:26:22.679806   10419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 22:26:22.679993   10419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 22:26:22.680066   10419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 22:26:22.771794   10419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 22:26:22.771954   10419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 22:26:23.272356   10419 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.033849ms
	I1210 22:26:23.275083   10419 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 22:26:23.275213   10419 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 22:26:23.275338   10419 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 22:26:23.275475   10419 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 22:26:24.669547   10419 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.394397155s
	I1210 22:26:25.858206   10419 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.583207862s
	I1210 22:26:26.777217   10419 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502134857s
	I1210 22:26:26.793794   10419 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 22:26:26.802972   10419 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 22:26:26.811755   10419 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 22:26:26.811927   10419 kubeadm.go:319] [mark-control-plane] Marking the node addons-713277 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 22:26:26.819519   10419 kubeadm.go:319] [bootstrap-token] Using token: 54ecna.64azq7impk1jbwgg
	I1210 22:26:26.821536   10419 out.go:252]   - Configuring RBAC rules ...
	I1210 22:26:26.821692   10419 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 22:26:26.824622   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 22:26:26.829510   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 22:26:26.831792   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 22:26:26.835022   10419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 22:26:26.837207   10419 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 22:26:27.182782   10419 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 22:26:27.596560   10419 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 22:26:28.182900   10419 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 22:26:28.183774   10419 kubeadm.go:319] 
	I1210 22:26:28.183838   10419 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 22:26:28.183862   10419 kubeadm.go:319] 
	I1210 22:26:28.183955   10419 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 22:26:28.183963   10419 kubeadm.go:319] 
	I1210 22:26:28.183986   10419 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 22:26:28.184055   10419 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 22:26:28.184166   10419 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 22:26:28.184178   10419 kubeadm.go:319] 
	I1210 22:26:28.184244   10419 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 22:26:28.184253   10419 kubeadm.go:319] 
	I1210 22:26:28.184318   10419 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 22:26:28.184342   10419 kubeadm.go:319] 
	I1210 22:26:28.184434   10419 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 22:26:28.184545   10419 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 22:26:28.184636   10419 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 22:26:28.184676   10419 kubeadm.go:319] 
	I1210 22:26:28.184776   10419 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 22:26:28.184864   10419 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 22:26:28.184873   10419 kubeadm.go:319] 
	I1210 22:26:28.184991   10419 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 54ecna.64azq7impk1jbwgg \
	I1210 22:26:28.185122   10419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 22:26:28.185164   10419 kubeadm.go:319] 	--control-plane 
	I1210 22:26:28.185176   10419 kubeadm.go:319] 
	I1210 22:26:28.185248   10419 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 22:26:28.185254   10419 kubeadm.go:319] 
	I1210 22:26:28.185323   10419 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 54ecna.64azq7impk1jbwgg \
	I1210 22:26:28.185415   10419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 22:26:28.187470   10419 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 22:26:28.187617   10419 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
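The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node with the standard procedure from the kubeadm documentation; a sketch, assuming an RSA CA key and the certificate directory this log configures (/var/lib/minikube/certs):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'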
	I1210 22:26:28.187669   10419 cni.go:84] Creating CNI manager for ""
	I1210 22:26:28.187683   10419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 22:26:28.189573   10419 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 22:26:28.190835   10419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 22:26:28.194973   10419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 22:26:28.194988   10419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 22:26:28.209273   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
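Once the CNI manifest has been applied, the resulting kube-system objects can be listed with the same embedded kubectl; an illustrative check only, since the manifest's resource names are not shown in this log:

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get daemonsets,pods -n kube-system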
	I1210 22:26:28.414382   10419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 22:26:28.414448   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:28.414519   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-713277 minikube.k8s.io/updated_at=2025_12_10T22_26_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=addons-713277 minikube.k8s.io/primary=true
	I1210 22:26:28.424007   10419 ops.go:34] apiserver oom_adj: -16
	I1210 22:26:28.489317   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:28.990150   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:29.489976   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:29.989447   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:30.489522   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:30.990356   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:31.490330   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:31.989977   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:32.490035   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:32.989399   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:33.489372   10419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:26:33.553166   10419 kubeadm.go:1114] duration metric: took 5.1387664s to wait for elevateKubeSystemPrivileges
	I1210 22:26:33.553216   10419 kubeadm.go:403] duration metric: took 13.979775684s to StartCluster
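At this point the control plane is initialized and the service-account poll loop above has succeeded. An equivalent manual sanity check, using the same kubectl binary and kubeconfig paths as the log, would look like this sketch:

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get sa default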
	I1210 22:26:33.553240   10419 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:33.553362   10419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:26:33.553783   10419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:33.553949   10419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 22:26:33.553975   10419 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 22:26:33.554037   10419 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 22:26:33.554148   10419 addons.go:70] Setting yakd=true in profile "addons-713277"
	I1210 22:26:33.554177   10419 addons.go:70] Setting inspektor-gadget=true in profile "addons-713277"
	I1210 22:26:33.554182   10419 addons.go:239] Setting addon yakd=true in "addons-713277"
	I1210 22:26:33.554198   10419 addons.go:239] Setting addon inspektor-gadget=true in "addons-713277"
	I1210 22:26:33.554204   10419 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:33.554221   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554228   10419 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-713277"
	I1210 22:26:33.554241   10419 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-713277"
	I1210 22:26:33.554254   10419 addons.go:70] Setting cloud-spanner=true in profile "addons-713277"
	I1210 22:26:33.554264   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554273   10419 addons.go:239] Setting addon cloud-spanner=true in "addons-713277"
	I1210 22:26:33.554265   10419 addons.go:70] Setting default-storageclass=true in profile "addons-713277"
	I1210 22:26:33.554292   10419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-713277"
	I1210 22:26:33.554290   10419 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-713277"
	I1210 22:26:33.554308   10419 addons.go:70] Setting ingress=true in profile "addons-713277"
	I1210 22:26:33.554332   10419 addons.go:239] Setting addon ingress=true in "addons-713277"
	I1210 22:26:33.554350   10419 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-713277"
	I1210 22:26:33.554355   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554359   10419 addons.go:70] Setting registry-creds=true in profile "addons-713277"
	I1210 22:26:33.554372   10419 addons.go:239] Setting addon registry-creds=true in "addons-713277"
	I1210 22:26:33.554379   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554391   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554391   10419 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-713277"
	I1210 22:26:33.554418   10419 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-713277"
	I1210 22:26:33.554450   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554747   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554805   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554818   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554839   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554843   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554862   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.554986   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.555786   10419 addons.go:70] Setting gcp-auth=true in profile "addons-713277"
	I1210 22:26:33.555815   10419 mustload.go:66] Loading cluster: addons-713277
	I1210 22:26:33.556003   10419 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:33.556281   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557287   10419 addons.go:70] Setting storage-provisioner=true in profile "addons-713277"
	I1210 22:26:33.554298   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.557409   10419 addons.go:70] Setting metrics-server=true in profile "addons-713277"
	I1210 22:26:33.557623   10419 addons.go:239] Setting addon metrics-server=true in "addons-713277"
	I1210 22:26:33.557760   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.557424   10419 addons.go:70] Setting volumesnapshots=true in profile "addons-713277"
	I1210 22:26:33.557905   10419 addons.go:239] Setting addon volumesnapshots=true in "addons-713277"
	I1210 22:26:33.558361   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.558399   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557459   10419 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-713277"
	I1210 22:26:33.558677   10419 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-713277"
	I1210 22:26:33.558850   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.558942   10419 out.go:179] * Verifying Kubernetes components...
	I1210 22:26:33.559839   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.558952   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557495   10419 addons.go:70] Setting ingress-dns=true in profile "addons-713277"
	I1210 22:26:33.560195   10419 addons.go:239] Setting addon ingress-dns=true in "addons-713277"
	I1210 22:26:33.560231   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.554221   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.561101   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.557540   10419 addons.go:70] Setting registry=true in profile "addons-713277"
	I1210 22:26:33.563161   10419 addons.go:239] Setting addon registry=true in "addons-713277"
	I1210 22:26:33.563197   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.563599   10419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:33.557554   10419 addons.go:239] Setting addon storage-provisioner=true in "addons-713277"
	I1210 22:26:33.564301   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.557469   10419 addons.go:70] Setting volcano=true in profile "addons-713277"
	I1210 22:26:33.565397   10419 addons.go:239] Setting addon volcano=true in "addons-713277"
	I1210 22:26:33.565429   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.565976   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.568830   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.569025   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.570143   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.604966   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 22:26:33.606939   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 22:26:33.610420   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.617188   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 22:26:33.619666   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 22:26:33.622411   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 22:26:33.622487   10419 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 22:26:33.623847   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 22:26:33.625138   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 22:26:33.625528   10419 addons.go:239] Setting addon default-storageclass=true in "addons-713277"
	I1210 22:26:33.625582   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.625756   10419 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 22:26:33.625771   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 22:26:33.625830   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.626239   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.627855   10419 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 22:26:33.627980   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 22:26:33.631375   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 22:26:33.631397   10419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 22:26:33.631454   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.631628   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 22:26:33.631637   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 22:26:33.631697   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.639960   10419 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 22:26:33.645317   10419 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 22:26:33.647494   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:26:33.647504   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 22:26:33.647523   10419 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 22:26:33.647530   10419 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 22:26:33.647596   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.647724   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 22:26:33.647739   10419 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 22:26:33.647804   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.649164   10419 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 22:26:33.649181   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 22:26:33.649229   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.651933   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:26:33.652076   10419 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 22:26:33.653694   10419 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 22:26:33.654844   10419 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 22:26:33.654865   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 22:26:33.654925   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.655552   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 22:26:33.656795   10419 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 22:26:33.656815   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 22:26:33.656857   10419 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 22:26:33.656862   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.656869   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 22:26:33.656917   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.656797   10419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 22:26:33.663614   10419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 22:26:33.663679   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 22:26:33.663797   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.668911   10419 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-713277"
	I1210 22:26:33.668954   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:33.669438   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:33.670735   10419 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 22:26:33.671997   10419 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 22:26:33.672021   10419 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 22:26:33.672055   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 22:26:33.672135   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.674511   10419 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 22:26:33.674769   10419 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 22:26:33.674800   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 22:26:33.674963   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.677541   10419 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 22:26:33.678748   10419 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 22:26:33.678807   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 22:26:33.678907   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	W1210 22:26:33.682005   10419 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 22:26:33.698349   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.716793   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.717545   10419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 22:26:33.717562   10419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 22:26:33.717625   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.726088   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.728224   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.730653   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.731187   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.732046   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.732874   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.744606   10419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 22:26:33.749668   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.751251   10419 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 22:26:33.751823   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.754759   10419 out.go:179]   - Using image docker.io/busybox:stable
	I1210 22:26:33.756776   10419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 22:26:33.756800   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 22:26:33.756856   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:33.758807   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.760708   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.763582   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.765552   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:33.792948   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
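The repeated sshutil lines show that every addon install shares one SSH target: the container's published port 22 (32768 on 127.0.0.1 in this run) with the per-profile key and the docker user. A sketch of reaching the node by hand with exactly those values; in normal use "minikube ssh -p addons-713277" does the same thing:

	ssh -i /home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa \
	    -p 32768 docker@127.0.0.1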
	I1210 22:26:33.803320   10419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 22:26:33.890053   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 22:26:33.890097   10419 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 22:26:33.896429   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 22:26:33.896449   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 22:26:33.896473   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 22:26:33.910306   10419 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 22:26:33.910331   10419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 22:26:33.913233   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 22:26:33.913852   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 22:26:33.913870   10419 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 22:26:33.927941   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 22:26:33.927966   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 22:26:33.937403   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 22:26:33.943213   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 22:26:33.944673   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 22:26:33.946692   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 22:26:33.950563   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 22:26:33.953085   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 22:26:33.954434   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 22:26:33.955273   10419 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 22:26:33.955293   10419 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 22:26:33.955847   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 22:26:33.955862   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 22:26:33.961102   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 22:26:33.961122   10419 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 22:26:33.964593   10419 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 22:26:33.964611   10419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 22:26:33.971909   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 22:26:33.974544   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 22:26:33.974615   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 22:26:33.992825   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 22:26:33.992936   10419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 22:26:34.007962   10419 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 22:26:34.007982   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 22:26:34.009673   10419 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 22:26:34.009748   10419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 22:26:34.037869   10419 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 22:26:34.037905   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 22:26:34.038411   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 22:26:34.038430   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 22:26:34.056182   10419 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 22:26:34.056205   10419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 22:26:34.060383   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 22:26:34.080594   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 22:26:34.080615   10419 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 22:26:34.093968   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 22:26:34.107665   10419 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 22:26:34.107698   10419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 22:26:34.124071   10419 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1210 22:26:34.125830   10419 node_ready.go:35] waiting up to 6m0s for node "addons-713277" to be "Ready" ...
	I1210 22:26:34.134235   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 22:26:34.143738   10419 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:26:34.143758   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 22:26:34.176332   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 22:26:34.176358   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 22:26:34.207875   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:26:34.247706   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 22:26:34.247733   10419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 22:26:34.340846   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 22:26:34.340959   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 22:26:34.412939   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 22:26:34.413023   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 22:26:34.463862   10419 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 22:26:34.463890   10419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 22:26:34.526146   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 22:26:34.631503   10419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-713277" context rescaled to 1 replicas
	I1210 22:26:35.123226   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.168760027s)
	I1210 22:26:35.123269   10419 addons.go:495] Verifying addon ingress=true in "addons-713277"
	I1210 22:26:35.123604   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.151663997s)
	I1210 22:26:35.123668   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.0632392s)
	I1210 22:26:35.123697   10419 addons.go:495] Verifying addon registry=true in "addons-713277"
	I1210 22:26:35.123814   10419 addons.go:495] Verifying addon metrics-server=true in "addons-713277"
	I1210 22:26:35.123743   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.029684117s)
	I1210 22:26:35.125013   10419 out.go:179] * Verifying registry addon...
	I1210 22:26:35.125016   10419 out.go:179] * Verifying ingress addon...
	I1210 22:26:35.126222   10419 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-713277 service yakd-dashboard -n yakd-dashboard
	
	I1210 22:26:35.128223   10419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 22:26:35.128318   10419 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 22:26:35.131145   10419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 22:26:35.131163   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:35.131342   10419 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 22:26:35.131357   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:35.490404   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.282421489s)
	W1210 22:26:35.490449   10419 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 22:26:35.490482   10419 retry.go:31] will retry after 148.382508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
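	[editor's illustration, not test output] The retry above is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same batch that creates its CustomResourceDefinition, so the API server has no mapping for kind "VolumeSnapshotClass" yet and minikube falls back to re-applying (later with --force). A minimal sketch of how one could avoid the race outside the addon machinery, assuming the CRD name shown in the stdout above and a kubectl that supports `kubectl wait` on CRDs, would be to wait for establishment before applying the dependent object:

		# wait until the CRD created in the first pass is established, then apply the class
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

	The path is the one used in this log; in other setups it would differ.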
	I1210 22:26:35.490721   10419 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-713277"
	I1210 22:26:35.496063   10419 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 22:26:35.498303   10419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 22:26:35.502385   10419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 22:26:35.502409   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:35.630805   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:35.630981   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:35.638986   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:26:36.002182   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:36.128460   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:36.130732   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:36.130857   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:36.501710   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:36.631235   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:36.631280   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:37.001951   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:37.131017   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:37.131091   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:37.501098   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:37.630498   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:37.630668   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:38.001762   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:38.105801   10419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.466768693s)
	I1210 22:26:38.131408   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:38.131587   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:38.501920   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:38.628261   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:38.630689   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:38.630751   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:39.001181   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:39.131421   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:39.131597   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:39.501903   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:39.631396   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:39.631460   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:40.001301   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:40.130982   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:40.131031   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:40.501896   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:40.628984   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:40.630875   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:40.630992   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:41.001320   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:41.131554   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:41.131699   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:41.227234   10419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 22:26:41.227296   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:41.245487   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:41.347137   10419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 22:26:41.359810   10419 addons.go:239] Setting addon gcp-auth=true in "addons-713277"
	I1210 22:26:41.359861   10419 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:26:41.360222   10419 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:26:41.377359   10419 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 22:26:41.377419   10419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:26:41.394997   10419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:26:41.489913   10419 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 22:26:41.491394   10419 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:26:41.492577   10419 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 22:26:41.492593   10419 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 22:26:41.501740   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:41.505952   10419 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 22:26:41.505974   10419 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 22:26:41.518961   10419 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 22:26:41.518983   10419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 22:26:41.531416   10419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 22:26:41.631105   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:41.631296   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:41.828981   10419 addons.go:495] Verifying addon gcp-auth=true in "addons-713277"
	I1210 22:26:41.830199   10419 out.go:179] * Verifying gcp-auth addon...
	I1210 22:26:41.832298   10419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 22:26:41.834446   10419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 22:26:41.834462   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:42.001289   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:42.130905   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:42.131006   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:42.335503   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:42.501486   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:42.629042   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:42.631024   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:42.631266   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:42.835112   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:43.001631   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:43.131329   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:43.131409   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:43.334939   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:43.501706   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:43.630417   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:43.630638   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:43.835052   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:44.001748   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:44.131163   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:44.131278   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:44.335934   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:44.501952   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:44.630419   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:44.630565   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:44.835036   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:45.001706   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:45.129303   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:45.131003   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:45.131223   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:45.335772   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:45.501721   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:45.630953   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:45.631148   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:45.835451   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:46.001240   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:46.130747   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:46.130987   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:46.335314   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:46.502175   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:46.630242   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:46.630349   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:46.834776   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:47.001387   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:47.130936   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:47.131098   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:47.335540   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:47.501391   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:47.629270   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:47.630765   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:47.630949   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:47.835942   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:48.001543   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:48.130860   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:48.130972   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:48.335570   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:48.502002   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:48.630390   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:48.630492   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:48.835085   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:49.001721   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:49.131136   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:49.131324   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:49.334990   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:49.501449   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:49.631007   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:49.631242   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:49.835683   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:50.001434   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:50.129091   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:50.130797   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:50.131138   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:50.335592   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:50.501703   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:50.631440   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:50.631585   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:50.835152   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:51.001865   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:51.130862   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:51.131213   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:51.335671   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:51.501071   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:51.630674   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:51.630760   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:51.835360   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:52.000806   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:52.129275   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:52.131154   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:52.131207   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:52.335700   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:52.501797   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:52.631096   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:52.631272   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:52.834667   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:53.001340   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:53.131016   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:53.131105   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:53.335744   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:53.501474   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:53.631317   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:53.631622   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:53.834958   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:54.001519   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:54.129424   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:54.131323   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:54.131550   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:54.334939   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:54.501942   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:54.630345   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:54.630585   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:54.834939   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:55.001896   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:55.130504   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:55.130677   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:55.334967   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:55.501726   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:55.631158   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:55.631289   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:55.835803   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:56.001265   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:56.130725   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:56.130905   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:56.335222   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:56.502099   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:56.628680   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:56.630491   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:56.630577   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:56.835430   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:57.001177   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:57.130509   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:57.130761   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:57.335107   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:57.502094   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:57.630506   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:57.630604   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:57.834821   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:58.001780   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:58.130925   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:58.130990   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:58.335550   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:58.501273   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:26:58.628796   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:26:58.630630   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:58.630831   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:58.835434   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:59.001613   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:59.130946   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:59.131214   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:59.335524   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:26:59.501167   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:26:59.630199   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:26:59.630351   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:26:59.834758   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:00.001205   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:00.130324   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:00.130509   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:00.334669   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:00.501602   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:00.629221   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:00.630750   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:00.630952   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:00.835438   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:01.000867   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:01.131049   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:01.131269   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:01.335719   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:01.501354   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:01.630511   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:01.630739   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:01.835224   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:02.001867   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:02.131143   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:02.131245   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:02.335505   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:02.501425   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:02.630759   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:02.630886   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:02.835414   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:03.000676   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:03.128972   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:03.130885   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:03.130903   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:03.335470   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:03.501242   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:03.630469   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:03.630607   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:03.835190   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:04.001760   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:04.131098   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:04.131291   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:04.334870   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:04.501620   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:04.630659   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:04.631055   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:04.835743   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:05.001163   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:05.130545   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:05.130709   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:05.335268   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:05.501290   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:05.628874   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:05.630478   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:05.630603   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:05.835137   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:06.001704   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:06.131153   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:06.131158   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:06.335636   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:06.501316   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:06.630617   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:06.630848   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:06.835249   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:07.001701   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:07.131296   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:07.131450   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:07.335137   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:07.500758   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:07.629578   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:07.631374   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:07.631404   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:07.834833   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:08.001444   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:08.130869   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:08.131038   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:08.335789   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:08.501518   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:08.630825   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:08.631031   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:08.835619   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:09.001444   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:09.130658   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:09.130817   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:09.335420   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:09.501039   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:09.630318   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:09.630417   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:09.834987   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:10.001625   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:10.129362   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:10.131054   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:10.131195   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:10.334813   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:10.501441   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:10.630729   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:10.630881   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:10.835505   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:11.001121   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:11.130447   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:11.130684   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:11.335438   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:11.501002   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:11.631268   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:11.631539   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:11.834959   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:12.001542   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:12.130732   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:12.130937   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:12.335493   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:12.501058   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 22:27:12.628731   10419 node_ready.go:57] node "addons-713277" has "Ready":"False" status (will retry)
	I1210 22:27:12.630424   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:12.630586   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:12.835257   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:13.001767   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:13.130877   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:13.130944   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:13.335416   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:13.501111   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:13.630517   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:13.630731   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:13.835106   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:14.006819   10419 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 22:27:14.006846   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:14.128858   10419 node_ready.go:49] node "addons-713277" is "Ready"
	I1210 22:27:14.128893   10419 node_ready.go:38] duration metric: took 40.003040499s for node "addons-713277" to be "Ready" ...
	I1210 22:27:14.128910   10419 api_server.go:52] waiting for apiserver process to appear ...
	I1210 22:27:14.128985   10419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:27:14.130787   10419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 22:27:14.130808   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:14.130962   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:14.144802   10419 api_server.go:72] duration metric: took 40.590791282s to wait for apiserver process to appear ...
	I1210 22:27:14.144833   10419 api_server.go:88] waiting for apiserver healthz status ...
	I1210 22:27:14.144857   10419 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 22:27:14.148901   10419 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 22:27:14.149819   10419 api_server.go:141] control plane version: v1.34.2
	I1210 22:27:14.149845   10419 api_server.go:131] duration metric: took 5.004057ms to wait for apiserver health ...
	I1210 22:27:14.149856   10419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 22:27:14.152842   10419 system_pods.go:59] 20 kube-system pods found
	I1210 22:27:14.152872   10419 system_pods.go:61] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending
	I1210 22:27:14.152883   10419 system_pods.go:61] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:14.152895   10419 system_pods.go:61] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.152908   10419 system_pods.go:61] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.152918   10419 system_pods.go:61] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending
	I1210 22:27:14.152930   10419 system_pods.go:61] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.152938   10419 system_pods.go:61] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.152943   10419 system_pods.go:61] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.152951   10419 system_pods.go:61] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.152964   10419 system_pods.go:61] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.152972   10419 system_pods.go:61] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.152982   10419 system_pods.go:61] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.152987   10419 system_pods.go:61] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.152995   10419 system_pods.go:61] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending
	I1210 22:27:14.153006   10419 system_pods.go:61] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.153020   10419 system_pods.go:61] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.153029   10419 system_pods.go:61] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending
	I1210 22:27:14.153040   10419 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.153049   10419 system_pods.go:61] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending
	I1210 22:27:14.153059   10419 system_pods.go:61] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:14.153070   10419 system_pods.go:74] duration metric: took 3.206813ms to wait for pod list to return data ...
	I1210 22:27:14.153080   10419 default_sa.go:34] waiting for default service account to be created ...
	I1210 22:27:14.155059   10419 default_sa.go:45] found service account: "default"
	I1210 22:27:14.155079   10419 default_sa.go:55] duration metric: took 1.98908ms for default service account to be created ...
	I1210 22:27:14.155089   10419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 22:27:14.159275   10419 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:14.159305   10419 system_pods.go:89] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending
	I1210 22:27:14.159315   10419 system_pods.go:89] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:14.159325   10419 system_pods.go:89] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.159335   10419 system_pods.go:89] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.159345   10419 system_pods.go:89] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending
	I1210 22:27:14.159351   10419 system_pods.go:89] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.159357   10419 system_pods.go:89] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.159362   10419 system_pods.go:89] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.159369   10419 system_pods.go:89] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.159382   10419 system_pods.go:89] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.159394   10419 system_pods.go:89] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.159402   10419 system_pods.go:89] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.159412   10419 system_pods.go:89] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.159418   10419 system_pods.go:89] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending
	I1210 22:27:14.159434   10419 system_pods.go:89] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.159445   10419 system_pods.go:89] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.159450   10419 system_pods.go:89] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending
	I1210 22:27:14.159458   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.159464   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending
	I1210 22:27:14.159472   10419 system_pods.go:89] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:14.159488   10419 retry.go:31] will retry after 222.081605ms: missing components: kube-dns
	I1210 22:27:14.336445   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:14.443843   10419 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:14.443888   10419 system_pods.go:89] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 22:27:14.443899   10419 system_pods.go:89] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:14.443910   10419 system_pods.go:89] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.443920   10419 system_pods.go:89] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.443929   10419 system_pods.go:89] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 22:27:14.443937   10419 system_pods.go:89] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.443944   10419 system_pods.go:89] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.443950   10419 system_pods.go:89] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.443956   10419 system_pods.go:89] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.443965   10419 system_pods.go:89] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.443971   10419 system_pods.go:89] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.443977   10419 system_pods.go:89] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.443986   10419 system_pods.go:89] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.443997   10419 system_pods.go:89] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 22:27:14.444013   10419 system_pods.go:89] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.444021   10419 system_pods.go:89] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.444032   10419 system_pods.go:89] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 22:27:14.444046   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.444058   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.444069   10419 system_pods.go:89] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:14.444087   10419 retry.go:31] will retry after 259.327228ms: missing components: kube-dns
	I1210 22:27:14.538036   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:14.638269   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:14.638311   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:14.741312   10419 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:14.741351   10419 system_pods.go:89] "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 22:27:14.741360   10419 system_pods.go:89] "coredns-66bc5c9577-q7vb5" [91237fa6-7040-44d9-869b-df5ec43c41dd] Running
	I1210 22:27:14.741372   10419 system_pods.go:89] "csi-hostpath-attacher-0" [992cee28-e648-42d5-9562-b4c3b3823750] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:14.741381   10419 system_pods.go:89] "csi-hostpath-resizer-0" [1cffaf4d-f891-40d8-96f9-15426d4f1855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:14.741390   10419 system_pods.go:89] "csi-hostpathplugin-hswm7" [426f321b-3f3a-460d-ac24-bed6aec96fce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 22:27:14.741397   10419 system_pods.go:89] "etcd-addons-713277" [871fcf3c-ce84-4f74-bce7-8ebd11959b12] Running
	I1210 22:27:14.741403   10419 system_pods.go:89] "kindnet-cjq4d" [caae1124-a57d-4946-a662-a94796ced28a] Running
	I1210 22:27:14.741409   10419 system_pods.go:89] "kube-apiserver-addons-713277" [aea63af4-0b20-4954-a3e6-5d3d3724e62a] Running
	I1210 22:27:14.741414   10419 system_pods.go:89] "kube-controller-manager-addons-713277" [b4c184b3-b1af-4be3-b123-d157c4a5fcaa] Running
	I1210 22:27:14.741422   10419 system_pods.go:89] "kube-ingress-dns-minikube" [c6c29966-9977-452c-a970-b5841386e26a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:14.741427   10419 system_pods.go:89] "kube-proxy-mtnxn" [5a74be49-0d7e-4ca2-bce7-2d02ceb9a72d] Running
	I1210 22:27:14.741434   10419 system_pods.go:89] "kube-scheduler-addons-713277" [3e0cc29d-ef0e-42b8-ae8f-bf445b762f58] Running
	I1210 22:27:14.741441   10419 system_pods.go:89] "metrics-server-85b7d694d7-f8kpc" [e178a6a5-9362-4069-b088-6b626c0ec1ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:14.741451   10419 system_pods.go:89] "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 22:27:14.741459   10419 system_pods.go:89] "registry-6b586f9694-95ck7" [2a6e4aa5-fb32-4bc9-9dcb-b14cd760d720] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:14.741468   10419 system_pods.go:89] "registry-creds-764b6fb674-dkzdq" [0fd0837e-3f9d-4230-9f13-bc89297e4d0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:14.741478   10419 system_pods.go:89] "registry-proxy-tlfbx" [a71b578d-f7dd-42dd-8f6a-cc3e292aa98c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 22:27:14.741485   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwmd4" [28f651af-6584-41d8-b93f-af7703574bee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.741496   10419 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w5cgz" [40d96c58-b195-42f5-8208-eba7013862c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:14.741501   10419 system_pods.go:89] "storage-provisioner" [815fdc0f-9123-4aaf-8cc0-3b31880fb6da] Running
	I1210 22:27:14.741511   10419 system_pods.go:126] duration metric: took 586.415394ms to wait for k8s-apps to be running ...
	I1210 22:27:14.741521   10419 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 22:27:14.741571   10419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:27:14.760225   10419 system_svc.go:56] duration metric: took 18.694973ms WaitForService to wait for kubelet
	I1210 22:27:14.760257   10419 kubeadm.go:587] duration metric: took 41.206250301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 22:27:14.760294   10419 node_conditions.go:102] verifying NodePressure condition ...
	I1210 22:27:14.764445   10419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 22:27:14.764475   10419 node_conditions.go:123] node cpu capacity is 8
	I1210 22:27:14.764492   10419 node_conditions.go:105] duration metric: took 4.192291ms to run NodePressure ...
	I1210 22:27:14.764508   10419 start.go:242] waiting for startup goroutines ...
	I1210 22:27:14.839990   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:15.002305   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:15.133555   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:15.134816   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:15.335431   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:15.501588   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:15.631440   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:15.631566   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:15.835038   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:16.002025   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:16.131482   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:16.131626   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:16.335592   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:16.501825   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:16.631963   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:16.632133   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:16.835958   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:17.002342   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:17.132359   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:17.132360   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:17.335275   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:17.501616   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:17.631699   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:17.632107   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:17.836259   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:18.002237   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:18.132295   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:18.132325   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:18.335803   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:18.502186   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:18.632569   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:18.632785   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:18.835358   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:19.001693   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:19.131702   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:19.131715   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:19.335345   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:19.501811   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:19.631491   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:19.631496   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:19.834907   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:20.002161   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:20.132727   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:20.132761   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:20.335730   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:20.502380   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:20.631604   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:20.631639   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:20.835923   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:21.002564   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:21.131310   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:21.131542   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.335938   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:21.501868   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:21.631717   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:21.631869   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.835786   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:22.002871   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:22.134280   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:22.134325   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:22.336693   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:22.501443   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:22.631761   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:22.631999   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:22.836038   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:23.002381   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:23.131212   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:23.131345   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:23.336125   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:23.502024   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:23.632094   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:23.632150   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:23.836249   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:24.002308   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:24.131790   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:24.131875   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:24.335442   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:24.501833   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:24.635040   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:24.635051   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:24.835713   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:25.002021   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:25.132093   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:25.132181   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:25.336032   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:25.501865   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:25.631450   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:25.631552   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:25.836416   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:26.001682   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:26.131734   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:26.131850   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:26.335239   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:26.502492   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:26.631047   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:26.631291   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:26.834842   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:27.002146   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:27.131689   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:27.131742   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:27.335205   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:27.501579   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:27.633784   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:27.633885   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:27.835799   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:28.001824   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:28.131781   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:28.131826   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:28.335296   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:28.501025   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:28.656919   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:28.701841   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:28.835825   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:29.001879   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:29.131708   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:29.131901   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:29.335627   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:29.501971   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:29.632292   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:29.632314   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:29.836295   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:30.001856   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:30.131812   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:30.131948   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:30.336094   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:30.502476   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:30.631347   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:30.631447   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:30.835082   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:31.002341   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:31.131303   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:31.131404   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:31.335899   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:31.502775   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:31.630510   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:31.630807   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:31.835048   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:32.001874   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:32.131674   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:32.131725   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:32.335117   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:32.502487   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:32.642245   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:32.642309   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:32.946955   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:33.049254   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:33.131974   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:33.132073   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:33.335725   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:33.501870   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:33.634535   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:33.634831   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:33.835206   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:34.002536   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:34.130926   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:34.131028   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:34.335284   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:34.502687   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:34.631427   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:34.631661   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:34.835324   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:35.002090   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:35.131601   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:35.131633   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:35.334957   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:35.501686   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:35.631555   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:35.631905   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:35.835828   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:36.002794   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:36.133977   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:36.134361   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:36.335241   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:36.502469   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:36.631289   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:36.631332   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:36.835638   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:37.063873   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:37.132166   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:37.132197   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:37.334937   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:37.502320   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:37.631957   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:37.632044   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:37.835922   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:38.002136   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:38.131638   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:38.131917   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:38.335139   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:38.501932   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:38.631871   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:38.631920   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:38.835826   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:39.002280   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:39.132311   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:39.132354   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:39.335638   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:39.501568   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:39.631117   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:39.631308   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:39.836503   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.002390   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.132510   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.132806   10419 kapi.go:107] duration metric: took 1m5.004584186s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 22:27:40.335397   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.502087   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.631503   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.835403   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:41.002060   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:41.131615   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:41.335091   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:41.503132   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:41.631798   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:41.835933   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:42.002314   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:42.132470   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:42.335243   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:42.501545   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:42.632070   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:42.835805   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:43.002191   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:43.132511   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:43.334853   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:43.502366   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:43.631246   10419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:43.836635   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:44.002753   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:44.131603   10419 kapi.go:107] duration metric: took 1m9.003281378s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 22:27:44.341292   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:44.501900   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:44.835768   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:45.002688   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:45.335528   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:45.501751   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:45.835788   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:46.003479   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:46.336187   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:46.502310   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:46.834956   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:47.002594   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:47.335771   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:47.502724   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:47.835061   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:48.002276   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:48.334922   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:48.502303   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:48.835895   10419 kapi.go:107] duration metric: took 1m7.003598985s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 22:27:48.837742   10419 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-713277 cluster.
	I1210 22:27:48.839097   10419 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 22:27:48.840503   10419 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 22:27:49.002132   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:49.502735   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:50.002509   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:50.502389   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:51.002117   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:51.501767   10419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:52.002451   10419 kapi.go:107] duration metric: took 1m16.504148561s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 22:27:52.004598   10419 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, registry-creds, storage-provisioner, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 22:27:52.006147   10419 addons.go:530] duration metric: took 1m18.452106426s for enable addons: enabled=[nvidia-device-plugin ingress-dns registry-creds storage-provisioner cloud-spanner amd-gpu-device-plugin inspektor-gadget default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 22:27:52.006198   10419 start.go:247] waiting for cluster config update ...
	I1210 22:27:52.006225   10419 start.go:256] writing updated cluster config ...
	I1210 22:27:52.006484   10419 ssh_runner.go:195] Run: rm -f paused
	I1210 22:27:52.010377   10419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 22:27:52.013500   10419 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7vb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.017235   10419 pod_ready.go:94] pod "coredns-66bc5c9577-q7vb5" is "Ready"
	I1210 22:27:52.017253   10419 pod_ready.go:86] duration metric: took 3.734284ms for pod "coredns-66bc5c9577-q7vb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.018927   10419 pod_ready.go:83] waiting for pod "etcd-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.022048   10419 pod_ready.go:94] pod "etcd-addons-713277" is "Ready"
	I1210 22:27:52.022065   10419 pod_ready.go:86] duration metric: took 3.120005ms for pod "etcd-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.023537   10419 pod_ready.go:83] waiting for pod "kube-apiserver-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.027982   10419 pod_ready.go:94] pod "kube-apiserver-addons-713277" is "Ready"
	I1210 22:27:52.028001   10419 pod_ready.go:86] duration metric: took 4.448174ms for pod "kube-apiserver-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.029594   10419 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.414295   10419 pod_ready.go:94] pod "kube-controller-manager-addons-713277" is "Ready"
	I1210 22:27:52.414324   10419 pod_ready.go:86] duration metric: took 384.71098ms for pod "kube-controller-manager-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:52.614169   10419 pod_ready.go:83] waiting for pod "kube-proxy-mtnxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.013425   10419 pod_ready.go:94] pod "kube-proxy-mtnxn" is "Ready"
	I1210 22:27:53.013451   10419 pod_ready.go:86] duration metric: took 399.26052ms for pod "kube-proxy-mtnxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.214674   10419 pod_ready.go:83] waiting for pod "kube-scheduler-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.613718   10419 pod_ready.go:94] pod "kube-scheduler-addons-713277" is "Ready"
	I1210 22:27:53.613745   10419 pod_ready.go:86] duration metric: took 399.042807ms for pod "kube-scheduler-addons-713277" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:27:53.613761   10419 pod_ready.go:40] duration metric: took 1.60335508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 22:27:53.656248   10419 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 22:27:53.659036   10419 out.go:179] * Done! kubectl is now configured to use "addons-713277" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 22:27:51 addons-713277 crio[775]: time="2025-12-10T22:27:51.081156426Z" level=info msg="Starting container: 4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184" id=91a1d28a-7859-4c70-8b0e-f1dc5ef4d564 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 22:27:51 addons-713277 crio[775]: time="2025-12-10T22:27:51.084202756Z" level=info msg="Started container" PID=6111 containerID=4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184 description=kube-system/csi-hostpathplugin-hswm7/csi-snapshotter id=91a1d28a-7859-4c70-8b0e-f1dc5ef4d564 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce8ac9672fef63967ad7cba20c159ddfb9b1fa2838fd95282bf459c2a8e6c397
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.490417291Z" level=info msg="Running pod sandbox: default/busybox/POD" id=133e0c89-c0e4-43bf-af37-c07eaa6dacb8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.490474421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.495990303Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3100f483947a280f6884865ddaf8c0ec2b702d1e90491578b43a468174466c0 UID:11388322-7f1e-4c85-84e7-f8e3566769a7 NetNS:/var/run/netns/34682045-9140-4cc1-aa61-87bc6e0b76d3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00062c878}] Aliases:map[]}"
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.496021746Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.505484538Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3100f483947a280f6884865ddaf8c0ec2b702d1e90491578b43a468174466c0 UID:11388322-7f1e-4c85-84e7-f8e3566769a7 NetNS:/var/run/netns/34682045-9140-4cc1-aa61-87bc6e0b76d3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00062c878}] Aliases:map[]}"
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.505600195Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.50645118Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.507199022Z" level=info msg="Ran pod sandbox f3100f483947a280f6884865ddaf8c0ec2b702d1e90491578b43a468174466c0 with infra container: default/busybox/POD" id=133e0c89-c0e4-43bf-af37-c07eaa6dacb8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.508334489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c22bd76-2435-4add-8565-fcc26fc76c07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.508460029Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9c22bd76-2435-4add-8565-fcc26fc76c07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.508502986Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9c22bd76-2435-4add-8565-fcc26fc76c07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.509014886Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=963aefca-9397-4333-9c46-d2dd5f4085e9 name=/runtime.v1.ImageService/PullImage
	Dec 10 22:27:54 addons-713277 crio[775]: time="2025-12-10T22:27:54.510524548Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.952411281Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=963aefca-9397-4333-9c46-d2dd5f4085e9 name=/runtime.v1.ImageService/PullImage
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.952993239Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10a08628-508b-4335-a071-4c5c22dc8e43 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.95425551Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2b029d52-7de2-4e0b-8f8d-c9c4a1b39c13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.958047789Z" level=info msg="Creating container: default/busybox/busybox" id=f76e6e45-d92c-45b1-908c-7e04384bb044 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.958188986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.96320796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.963638626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.993569871Z" level=info msg="Created container 9f42c59cb2d29ff4ea061d3dee82f45a242e32e9a2c8b3f574a67d13a12366bd: default/busybox/busybox" id=f76e6e45-d92c-45b1-908c-7e04384bb044 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.994215527Z" level=info msg="Starting container: 9f42c59cb2d29ff4ea061d3dee82f45a242e32e9a2c8b3f574a67d13a12366bd" id=f9f89175-3c2d-4ae0-bac7-c4cb6a601ba2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 22:27:55 addons-713277 crio[775]: time="2025-12-10T22:27:55.996022509Z" level=info msg="Started container" PID=6232 containerID=9f42c59cb2d29ff4ea061d3dee82f45a242e32e9a2c8b3f574a67d13a12366bd description=default/busybox/busybox id=f9f89175-3c2d-4ae0-bac7-c4cb6a601ba2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3100f483947a280f6884865ddaf8c0ec2b702d1e90491578b43a468174466c0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	9f42c59cb2d29       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   f3100f483947a       busybox                                     default
	4c9bba5f39f38       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	74c081e28286c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	48438e9e3f252       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	ae6e6a01168ed       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   8d6764f406a8e       gcp-auth-78565c9fb4-xcp2p                   gcp-auth
	2163a8cf9861c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	5abf94cf2bb20       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            19 seconds ago       Running             gadget                                   0                   f87838fd8c17e       gadget-9zvtj                                gadget
	2450e25bed015       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	a99aabaa871bf       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             22 seconds ago       Running             controller                               0                   fd748af321177       ingress-nginx-controller-85d4c799dd-f4mfr   ingress-nginx
	d7718e5cd6534       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             22 seconds ago       Exited              patch                                    2                   2969902d131d3       ingress-nginx-admission-patch-5hp7s         ingress-nginx
	165ba560b21ce       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago       Running             registry-proxy                           0                   fc647661f67a3       registry-proxy-tlfbx                        kube-system
	28bfb1217531d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     27 seconds ago       Running             nvidia-device-plugin-ctr                 0                   70aa7b2637f08       nvidia-device-plugin-daemonset-xz7l5        kube-system
	8fc592d7667df       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     30 seconds ago       Running             amd-gpu-device-plugin                    0                   62bbcb9c1106b       amd-gpu-device-plugin-9zlkh                 kube-system
	1db25aab3edc4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   31 seconds ago       Running             csi-external-health-monitor-controller   0                   ce8ac9672fef6       csi-hostpathplugin-hswm7                    kube-system
	04ea9bfa0bce4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             32 seconds ago       Running             csi-attacher                             0                   6bee991f8107e       csi-hostpath-attacher-0                     kube-system
	6ae10e6bd3d43       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   5ff9db8ed6958       snapshot-controller-7d9fbc56b8-rwmd4        kube-system
	32b2d9f8ab360       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             33 seconds ago       Exited              patch                                    1                   700ea79518f16       gcp-auth-certs-patch-7w5mw                  gcp-auth
	979e705cc3192       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   f61fde7b45534       snapshot-controller-7d9fbc56b8-w5cgz        kube-system
	e45fa578f33f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago       Exited              create                                   0                   a7f011b572b15       gcp-auth-certs-create-bbj5k                 gcp-auth
	079244ec7bd48       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago       Running             csi-resizer                              0                   516e758d4c1aa       csi-hostpath-resizer-0                      kube-system
	a327a461cdc90       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              35 seconds ago       Running             yakd                                     0                   35a15ea941931       yakd-dashboard-5ff678cb9-7ccv7              yakd-dashboard
	aca086177b901       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   38 seconds ago       Exited              create                                   0                   58fda7a7f0615       ingress-nginx-admission-create-8t8lk        ingress-nginx
	d3ada68a097ba       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           38 seconds ago       Running             registry                                 0                   dbcc79f1967b3       registry-6b586f9694-95ck7                   kube-system
	da0b5c6014eca       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             40 seconds ago       Running             local-path-provisioner                   0                   9bd519cc4093f       local-path-provisioner-648f6765c9-nzlgq     local-path-storage
	0f28ecb799a2a       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               41 seconds ago       Running             cloud-spanner-emulator                   0                   c72c25acc12f6       cloud-spanner-emulator-5bdddb765-lw7mn      default
	bb607f8a94b39       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        43 seconds ago       Running             metrics-server                           0                   c62554d2dd6a7       metrics-server-85b7d694d7-f8kpc             kube-system
	b4b4d4119a9e0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               45 seconds ago       Running             minikube-ingress-dns                     0                   c860bd0c9e388       kube-ingress-dns-minikube                   kube-system
	32ba87316889a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             50 seconds ago       Running             coredns                                  0                   750ec8d05964f       coredns-66bc5c9577-q7vb5                    kube-system
	1823e7451c0fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             50 seconds ago       Running             storage-provisioner                      0                   ce83b3201bb4b       storage-provisioner                         kube-system
	23b97f2410dd1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   128858036ffe3       kube-proxy-mtnxn                            kube-system
	bef4905bf4d28       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   1d65c854a47b4       kindnet-cjq4d                               kube-system
	41f1ac5834be0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   7a839a1013562       kube-scheduler-addons-713277                kube-system
	a19c2cf65ed7f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   e0c7ec1eb0b65       kube-apiserver-addons-713277                kube-system
	5f60ada2aeca2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   240cd6deb42b6       etcd-addons-713277                          kube-system
	7e9f40ca0ad08       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   7f705f8105d84       kube-controller-manager-addons-713277       kube-system
	
	
	==> coredns [32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca] <==
	[INFO] 10.244.0.17:42021 - 240 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000148987s
	[INFO] 10.244.0.17:53444 - 60903 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117986s
	[INFO] 10.244.0.17:53444 - 60446 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121103s
	[INFO] 10.244.0.17:40396 - 16243 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.0000789s
	[INFO] 10.244.0.17:40396 - 15868 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000090165s
	[INFO] 10.244.0.17:41947 - 45019 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00007471s
	[INFO] 10.244.0.17:41947 - 44769 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000083298s
	[INFO] 10.244.0.17:55492 - 48041 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000043921s
	[INFO] 10.244.0.17:55492 - 48311 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000084102s
	[INFO] 10.244.0.17:53432 - 28588 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000115724s
	[INFO] 10.244.0.17:53432 - 28379 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161393s
	[INFO] 10.244.0.22:55528 - 61611 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245493s
	[INFO] 10.244.0.22:41224 - 57331 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000298215s
	[INFO] 10.244.0.22:60934 - 59956 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168211s
	[INFO] 10.244.0.22:56468 - 49013 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196739s
	[INFO] 10.244.0.22:54987 - 36463 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118724s
	[INFO] 10.244.0.22:53409 - 38271 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152617s
	[INFO] 10.244.0.22:51103 - 57088 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005809622s
	[INFO] 10.244.0.22:47589 - 37312 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008694252s
	[INFO] 10.244.0.22:50642 - 61793 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005299168s
	[INFO] 10.244.0.22:46570 - 3705 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006843894s
	[INFO] 10.244.0.22:47458 - 32974 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006133263s
	[INFO] 10.244.0.22:56120 - 61020 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006177065s
	[INFO] 10.244.0.22:40838 - 5122 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001005577s
	[INFO] 10.244.0.22:40789 - 46888 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002263515s
	
	
	==> describe nodes <==
	Name:               addons-713277
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-713277
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=addons-713277
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T22_26_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-713277
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-713277"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 22:26:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-713277
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 22:27:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 22:27:59 +0000   Wed, 10 Dec 2025 22:26:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 22:27:59 +0000   Wed, 10 Dec 2025 22:26:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 22:27:59 +0000   Wed, 10 Dec 2025 22:26:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 22:27:59 +0000   Wed, 10 Dec 2025 22:27:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-713277
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                d2f88722-02be-4edd-a9d7-2da89e9e84d9
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-lw7mn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gadget                      gadget-9zvtj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gcp-auth                    gcp-auth-78565c9fb4-xcp2p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-f4mfr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         90s
	  kube-system                 amd-gpu-device-plugin-9zlkh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-66bc5c9577-q7vb5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-hswm7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 etcd-addons-713277                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         99s
	  kube-system                 kindnet-cjq4d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-addons-713277                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-addons-713277        200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-mtnxn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-addons-713277                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 metrics-server-85b7d694d7-f8kpc              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         91s
	  kube-system                 nvidia-device-plugin-daemonset-xz7l5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 registry-6b586f9694-95ck7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-creds-764b6fb674-dkzdq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-tlfbx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 snapshot-controller-7d9fbc56b8-rwmd4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-w5cgz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  local-path-storage          local-path-provisioner-648f6765c9-nzlgq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7ccv7               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node addons-713277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node addons-713277 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node addons-713277 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node addons-713277 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node addons-713277 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node addons-713277 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                  node-controller  Node addons-713277 event: Registered Node addons-713277 in Controller
	  Normal  NodeReady                52s                  kubelet          Node addons-713277 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 22:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001863] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.091008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.403764] i8042: Warning: Keylock active
	[  +0.012963] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.521495] block sda: the capability attribute has been deprecated.
	[  +0.094266] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026475] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.925136] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef] <==
	{"level":"warn","ts":"2025-12-10T22:26:24.682572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.689093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.696289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.711900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.718452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.724792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.731401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.739247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.745608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.771659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.779273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.787277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:24.840303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:35.860147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:26:35.866702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.267209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.274667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.292770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:02.305603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:27:32.945163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.375226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:27:32.945268Z","caller":"traceutil/trace.go:172","msg":"trace[1392252888] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1106; }","duration":"110.488177ms","start":"2025-12-10T22:27:32.834765Z","end":"2025-12-10T22:27:32.945253Z","steps":["trace[1392252888] 'agreement among raft nodes before linearized reading'  (duration: 47.616744ms)","trace[1392252888] 'range keys from in-memory index tree'  (duration: 62.733551ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:32.945308Z","caller":"traceutil/trace.go:172","msg":"trace[1033802621] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"182.215784ms","start":"2025-12-10T22:27:32.763076Z","end":"2025-12-10T22:27:32.945291Z","steps":["trace[1033802621] 'process raft request'  (duration: 119.349642ms)","trace[1033802621] 'compare'  (duration: 62.652756ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:36.834312Z","caller":"traceutil/trace.go:172","msg":"trace[1650918149] transaction","detail":"{read_only:false; response_revision:1137; number_of_response:1; }","duration":"179.39717ms","start":"2025-12-10T22:27:36.654884Z","end":"2025-12-10T22:27:36.834281Z","steps":["trace[1650918149] 'process raft request'  (duration: 96.594074ms)","trace[1650918149] 'compare'  (duration: 82.557407ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:37.061966Z","caller":"traceutil/trace.go:172","msg":"trace[833499783] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"148.365728ms","start":"2025-12-10T22:27:36.913581Z","end":"2025-12-10T22:27:37.061947Z","steps":["trace[833499783] 'process raft request'  (duration: 81.527996ms)","trace[833499783] 'compare'  (duration: 66.715264ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T22:27:49.988342Z","caller":"traceutil/trace.go:172","msg":"trace[1051301466] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"134.205396ms","start":"2025-12-10T22:27:49.854113Z","end":"2025-12-10T22:27:49.988319Z","steps":["trace[1051301466] 'process raft request'  (duration: 51.901972ms)","trace[1051301466] 'compare'  (duration: 82.070747ms)"],"step_count":2}
	
	
	==> gcp-auth [ae6e6a01168ed89fd4ee6ba681bee9a03fc8fc0d6654dc4ceaddd87cef212eff] <==
	2025/12/10 22:27:48 GCP Auth Webhook started!
	2025/12/10 22:27:53 Ready to marshal response ...
	2025/12/10 22:27:53 Ready to write response ...
	2025/12/10 22:27:54 Ready to marshal response ...
	2025/12/10 22:27:54 Ready to write response ...
	2025/12/10 22:27:54 Ready to marshal response ...
	2025/12/10 22:27:54 Ready to write response ...
	
	
	==> kernel <==
	 22:28:05 up 10 min,  0 user,  load average: 1.68, 0.84, 0.32
	Linux addons-713277 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc] <==
	I1210 22:26:33.489994       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T22:26:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 22:26:33.790320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 22:26:33.796310       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 22:26:33.796406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 22:26:33.796535       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 22:27:03.795679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 22:27:03.795688       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 22:27:03.795683       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 22:27:03.795762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1210 22:27:05.396567       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 22:27:05.396610       1 metrics.go:72] Registering metrics
	I1210 22:27:05.396691       1 controller.go:711] "Syncing nftables rules"
	I1210 22:27:13.794703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:27:13.794759       1 main.go:301] handling current node
	I1210 22:27:23.791043       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:27:23.791095       1 main.go:301] handling current node
	I1210 22:27:33.791328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:27:33.791360       1 main.go:301] handling current node
	I1210 22:27:43.790882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:27:43.790924       1 main.go:301] handling current node
	I1210 22:27:53.791146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:27:53.791172       1 main.go:301] handling current node
	I1210 22:28:03.790728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 22:28:03.790763       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b] <==
	E1210 22:27:22.592616       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.192.9:443: connect: connection refused" logger="UnhandledError"
	E1210 22:27:22.592734       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 22:27:22.593073       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.192.9:443: connect: connection refused" logger="UnhandledError"
	E1210 22:27:22.598282       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.192.9:443: connect: connection refused" logger="UnhandledError"
	W1210 22:27:23.594725       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 22:27:23.594745       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 22:27:23.594783       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 22:27:23.594802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1210 22:27:23.594827       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 22:27:23.595959       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 22:27:25.321344       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1210 22:27:27.625191       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 22:27:27.625218       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.192.9:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1210 22:27:27.625238       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 22:28:03.325332       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53324: use of closed network connection
	E1210 22:28:03.468678       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53358: use of closed network connection
	
	
	==> kube-controller-manager [7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326] <==
	I1210 22:26:32.249487       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 22:26:32.249569       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 22:26:32.249774       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 22:26:32.249875       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 22:26:32.249890       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 22:26:32.249876       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 22:26:32.249960       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 22:26:32.250212       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 22:26:32.250230       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 22:26:32.250272       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 22:26:32.252514       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 22:26:32.255831       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 22:26:32.259220       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 22:26:32.264420       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 22:26:32.271889       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 22:26:32.275034       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E1210 22:26:34.894323       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 22:27:02.260595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 22:27:02.260726       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 22:27:02.260774       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 22:27:02.281625       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 22:27:02.284897       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 22:27:02.361821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 22:27:02.385199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 22:27:17.207788       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36] <==
	I1210 22:26:33.365088       1 server_linux.go:53] "Using iptables proxy"
	I1210 22:26:33.430279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 22:26:33.530452       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 22:26:33.530529       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 22:26:33.530604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:26:33.550193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 22:26:33.550249       1 server_linux.go:132] "Using iptables Proxier"
	I1210 22:26:33.556424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:26:33.563054       1 server.go:527] "Version info" version="v1.34.2"
	I1210 22:26:33.563389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:26:33.565348       1 config.go:200] "Starting service config controller"
	I1210 22:26:33.566205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:26:33.566131       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:26:33.566975       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:26:33.566143       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:26:33.567131       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:26:33.566923       1 config.go:309] "Starting node config controller"
	I1210 22:26:33.567222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:26:33.669126       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 22:26:33.669218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 22:26:33.669333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 22:26:33.672847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459] <==
	I1210 22:26:25.850165       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:26:25.851788       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 22:26:25.851812       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 22:26:25.852185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 22:26:25.852216       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 22:26:25.855231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 22:26:25.855253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:26:25.855303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:26:25.855430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:26:25.855434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:26:25.855866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 22:26:25.856103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:26:25.856114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 22:26:25.856285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 22:26:25.856401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 22:26:25.856523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:26:25.856623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:26:25.856683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 22:26:25.856703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 22:26:25.856881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 22:26:25.857096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 22:26:25.857099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 22:26:25.857171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 22:26:25.857172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1210 22:26:27.351924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 22:27:38 addons-713277 kubelet[1285]: I1210 22:27:38.651841    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xz7l5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:27:38 addons-713277 kubelet[1285]: I1210 22:27:38.662870    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-xz7l5" podStartSLOduration=2.449543097 podStartE2EDuration="25.662849992s" podCreationTimestamp="2025-12-10 22:27:13 +0000 UTC" firstStartedPulling="2025-12-10 22:27:14.446157556 +0000 UTC m=+47.112860723" lastFinishedPulling="2025-12-10 22:27:37.659464456 +0000 UTC m=+70.326167618" observedRunningTime="2025-12-10 22:27:38.662608381 +0000 UTC m=+71.329311557" watchObservedRunningTime="2025-12-10 22:27:38.662849992 +0000 UTC m=+71.329553168"
	Dec 10 22:27:39 addons-713277 kubelet[1285]: I1210 22:27:39.656688    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tlfbx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:27:39 addons-713277 kubelet[1285]: I1210 22:27:39.657924    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xz7l5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:27:39 addons-713277 kubelet[1285]: I1210 22:27:39.668373    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-tlfbx" podStartSLOduration=0.73762516 podStartE2EDuration="25.668353997s" podCreationTimestamp="2025-12-10 22:27:14 +0000 UTC" firstStartedPulling="2025-12-10 22:27:14.519296477 +0000 UTC m=+47.185999634" lastFinishedPulling="2025-12-10 22:27:39.450025303 +0000 UTC m=+72.116728471" observedRunningTime="2025-12-10 22:27:39.667829618 +0000 UTC m=+72.334532806" watchObservedRunningTime="2025-12-10 22:27:39.668353997 +0000 UTC m=+72.335057173"
	Dec 10 22:27:40 addons-713277 kubelet[1285]: I1210 22:27:40.412367    1285 scope.go:117] "RemoveContainer" containerID="09cf540b86c37c50095b439eed54a94761fa0d7faf0308d7aa4b9653ad940ee1"
	Dec 10 22:27:40 addons-713277 kubelet[1285]: I1210 22:27:40.661229    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tlfbx" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:27:43 addons-713277 kubelet[1285]: I1210 22:27:43.676792    1285 scope.go:117] "RemoveContainer" containerID="09cf540b86c37c50095b439eed54a94761fa0d7faf0308d7aa4b9653ad940ee1"
	Dec 10 22:27:43 addons-713277 kubelet[1285]: I1210 22:27:43.710977    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-f4mfr" podStartSLOduration=55.913505764 podStartE2EDuration="1m8.710955943s" podCreationTimestamp="2025-12-10 22:26:35 +0000 UTC" firstStartedPulling="2025-12-10 22:27:30.077784625 +0000 UTC m=+62.744487788" lastFinishedPulling="2025-12-10 22:27:42.875234794 +0000 UTC m=+75.541937967" observedRunningTime="2025-12-10 22:27:43.690872072 +0000 UTC m=+76.357575247" watchObservedRunningTime="2025-12-10 22:27:43.710955943 +0000 UTC m=+76.377659120"
	Dec 10 22:27:45 addons-713277 kubelet[1285]: I1210 22:27:45.628578    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmx8g\" (UniqueName: \"kubernetes.io/projected/64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34-kube-api-access-vmx8g\") pod \"64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34\" (UID: \"64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34\") "
	Dec 10 22:27:45 addons-713277 kubelet[1285]: I1210 22:27:45.630810    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34-kube-api-access-vmx8g" (OuterVolumeSpecName: "kube-api-access-vmx8g") pod "64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34" (UID: "64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34"). InnerVolumeSpecName "kube-api-access-vmx8g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 22:27:45 addons-713277 kubelet[1285]: I1210 22:27:45.685271    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2969902d131d38a19ca3b1f048dbc028a978d205fb71155aa8ffc83705ae9965"
	Dec 10 22:27:45 addons-713277 kubelet[1285]: I1210 22:27:45.730185    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmx8g\" (UniqueName: \"kubernetes.io/projected/64b29a03-5d0d-4ea7-b29f-2bb1fa16dc34-kube-api-access-vmx8g\") on node \"addons-713277\" DevicePath \"\""
	Dec 10 22:27:45 addons-713277 kubelet[1285]: E1210 22:27:45.831589    1285 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 10 22:27:45 addons-713277 kubelet[1285]: E1210 22:27:45.831687    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fd0837e-3f9d-4230-9f13-bc89297e4d0e-gcr-creds podName:0fd0837e-3f9d-4230-9f13-bc89297e4d0e nodeName:}" failed. No retries permitted until 2025-12-10 22:28:17.831667837 +0000 UTC m=+110.498371010 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0fd0837e-3f9d-4230-9f13-bc89297e4d0e-gcr-creds") pod "registry-creds-764b6fb674-dkzdq" (UID: "0fd0837e-3f9d-4230-9f13-bc89297e4d0e") : secret "registry-creds-gcr" not found
	Dec 10 22:27:46 addons-713277 kubelet[1285]: I1210 22:27:46.706768    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-9zvtj" podStartSLOduration=66.496716464 podStartE2EDuration="1m12.706748312s" podCreationTimestamp="2025-12-10 22:26:34 +0000 UTC" firstStartedPulling="2025-12-10 22:27:39.4262131 +0000 UTC m=+72.092916260" lastFinishedPulling="2025-12-10 22:27:45.636244942 +0000 UTC m=+78.302948108" observedRunningTime="2025-12-10 22:27:46.705755069 +0000 UTC m=+79.372458245" watchObservedRunningTime="2025-12-10 22:27:46.706748312 +0000 UTC m=+79.373451490"
	Dec 10 22:27:47 addons-713277 kubelet[1285]: I1210 22:27:47.468904    1285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 10 22:27:47 addons-713277 kubelet[1285]: I1210 22:27:47.468961    1285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 10 22:27:48 addons-713277 kubelet[1285]: I1210 22:27:48.716453    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-xcp2p" podStartSLOduration=65.962596244 podStartE2EDuration="1m7.716434769s" podCreationTimestamp="2025-12-10 22:26:41 +0000 UTC" firstStartedPulling="2025-12-10 22:27:46.393098018 +0000 UTC m=+79.059801186" lastFinishedPulling="2025-12-10 22:27:48.146936553 +0000 UTC m=+80.813639711" observedRunningTime="2025-12-10 22:27:48.715775543 +0000 UTC m=+81.382478719" watchObservedRunningTime="2025-12-10 22:27:48.716434769 +0000 UTC m=+81.383137946"
	Dec 10 22:27:51 addons-713277 kubelet[1285]: I1210 22:27:51.734784    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-hswm7" podStartSLOduration=2.13779343 podStartE2EDuration="38.734764192s" podCreationTimestamp="2025-12-10 22:27:13 +0000 UTC" firstStartedPulling="2025-12-10 22:27:14.437935048 +0000 UTC m=+47.104638219" lastFinishedPulling="2025-12-10 22:27:51.034905826 +0000 UTC m=+83.701608981" observedRunningTime="2025-12-10 22:27:51.733952226 +0000 UTC m=+84.400655436" watchObservedRunningTime="2025-12-10 22:27:51.734764192 +0000 UTC m=+84.401467369"
	Dec 10 22:27:54 addons-713277 kubelet[1285]: I1210 22:27:54.188294    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbk6\" (UniqueName: \"kubernetes.io/projected/11388322-7f1e-4c85-84e7-f8e3566769a7-kube-api-access-kwbk6\") pod \"busybox\" (UID: \"11388322-7f1e-4c85-84e7-f8e3566769a7\") " pod="default/busybox"
	Dec 10 22:27:54 addons-713277 kubelet[1285]: I1210 22:27:54.188348    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/11388322-7f1e-4c85-84e7-f8e3566769a7-gcp-creds\") pod \"busybox\" (UID: \"11388322-7f1e-4c85-84e7-f8e3566769a7\") " pod="default/busybox"
	Dec 10 22:27:56 addons-713277 kubelet[1285]: I1210 22:27:56.755334    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.310345261 podStartE2EDuration="2.755313653s" podCreationTimestamp="2025-12-10 22:27:54 +0000 UTC" firstStartedPulling="2025-12-10 22:27:54.508748794 +0000 UTC m=+87.175451952" lastFinishedPulling="2025-12-10 22:27:55.953717178 +0000 UTC m=+88.620420344" observedRunningTime="2025-12-10 22:27:56.754350039 +0000 UTC m=+89.421053215" watchObservedRunningTime="2025-12-10 22:27:56.755313653 +0000 UTC m=+89.422016829"
	Dec 10 22:28:03 addons-713277 kubelet[1285]: I1210 22:28:03.414574    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f80933-ab5b-4dbc-a82f-d625b5964941" path="/var/lib/kubelet/pods/f9f80933-ab5b-4dbc-a82f-d625b5964941/volumes"
	Dec 10 22:28:03 addons-713277 kubelet[1285]: E1210 22:28:03.468602    1285 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42050->127.0.0.1:46755: write tcp 127.0.0.1:42050->127.0.0.1:46755: write: broken pipe
	
	
	==> storage-provisioner [1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508] <==
	W1210 22:27:40.850954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:42.857089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:42.860966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:44.919476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:44.923789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:46.927039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:46.931807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:48.935093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:48.938936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:50.941638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:50.948748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:52.952553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:52.957298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:54.960201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:54.966870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:56.969768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:56.973154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:58.975807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:27:58.979737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:28:00.982815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:28:00.986331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:28:02.989757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:28:02.994949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:28:04.998131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:28:05.002021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-713277 -n addons-713277
helpers_test.go:270: (dbg) Run:  kubectl --context addons-713277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s registry-creds-764b6fb674-dkzdq
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-713277 describe pod ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s registry-creds-764b6fb674-dkzdq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-713277 describe pod ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s registry-creds-764b6fb674-dkzdq: exit status 1 (59.365351ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8t8lk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5hp7s" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dkzdq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-713277 describe pod ingress-nginx-admission-create-8t8lk ingress-nginx-admission-patch-5hp7s registry-creds-764b6fb674-dkzdq: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable headlamp --alsologtostderr -v=1: exit status 11 (238.138267ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:06.023116   19394 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:06.023238   19394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:06.023246   19394 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:06.023251   19394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:06.023459   19394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:06.023725   19394 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:06.024103   19394 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:06.024120   19394 addons.go:622] checking whether the cluster is paused
	I1210 22:28:06.024214   19394 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:06.024226   19394 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:06.024625   19394 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:06.042383   19394 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:06.042434   19394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:06.059768   19394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:06.155396   19394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:06.155472   19394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:06.183513   19394 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:06.183536   19394 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:06.183542   19394 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:06.183548   19394 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:06.183553   19394 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:06.183558   19394 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:06.183561   19394 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:06.183565   19394 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:06.183568   19394 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:06.183574   19394 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:06.183577   19394 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:06.183580   19394 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:06.183583   19394 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:06.183586   19394 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:06.183589   19394 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:06.183597   19394 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:06.183600   19394 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:06.183604   19394 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:06.183607   19394 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:06.183610   19394 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:06.183617   19394 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:06.183620   19394 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:06.183625   19394 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:06.183628   19394 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:06.183630   19394 cri.go:89] found id: ""
	I1210 22:28:06.183711   19394 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:06.197669   19394 out.go:203] 
	W1210 22:28:06.198811   19394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:06.198829   19394 out.go:285] * 
	* 
	W1210 22:28:06.201792   19394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:06.202928   19394 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.50s)
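Every addon-disable failure in this run exits with status 11 for the same reason visible in the stderr above: the crictl listing of kube-system containers succeeds, but minikube's follow-up paused-state probe, "sudo runc list -f json", fails on this crio node with "open /run/runc: no such file or directory". A minimal way to reproduce just that failing probe by hand, assuming the addons-713277 profile from this run is still up, is:

	# Re-run the probe minikube issues before disabling an addon; on this crio
	# node it is expected to exit non-zero because /run/runc is absent.
	out/minikube-linux-amd64 -p addons-713277 ssh "sudo runc list -f json"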

                                                
                                    
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-lw7mn" [75c37bc8-16b6-4ed1-be72-85e30e22a384] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00355704s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.375465ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:27.435616   21998 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:27.435790   21998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:27.435800   21998 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:27.435804   21998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:27.435984   21998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:27.436261   21998 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:27.436637   21998 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:27.436668   21998 addons.go:622] checking whether the cluster is paused
	I1210 22:28:27.436771   21998 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:27.436786   21998 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:27.437211   21998 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:27.459587   21998 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:27.459660   21998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:27.481393   21998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:27.576377   21998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:27.576476   21998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:27.607061   21998 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:28:27.607093   21998 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:27.607100   21998 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:27.607105   21998 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:27.607109   21998 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:27.607116   21998 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:27.607120   21998 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:27.607125   21998 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:27.607135   21998 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:27.607149   21998 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:27.607157   21998 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:27.607163   21998 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:27.607170   21998 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:27.607175   21998 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:27.607183   21998 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:27.607190   21998 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:27.607198   21998 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:27.607205   21998 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:27.607209   21998 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:27.607214   21998 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:27.607221   21998 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:27.607225   21998 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:27.607266   21998 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:27.607277   21998 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:27.607285   21998 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:27.607290   21998 cri.go:89] found id: ""
	I1210 22:28:27.607342   21998 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:27.623194   21998 out.go:203] 
	W1210 22:28:27.624490   21998 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:27.624507   21998 out.go:285] * 
	* 
	W1210 22:28:27.627450   21998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:27.628628   21998 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)
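The CloudSpanner check itself passes (the emulator pod is healthy within about 5 seconds); only the shared disable step fails, with the same runc stderr as above. Since the error points at a missing /run/runc, a hedged way to see what the node actually exposes is the sketch below; the profile name comes from this run, while the /run/crio path is an assumption added for comparison, not something taken from the log.

	# Print the CRI runtime status and check which state directories exist on the node.
	# /run/crio is an assumed comparison path; /run/runc is the one the failing probe expects.
	out/minikube-linux-amd64 -p addons-713277 ssh "sudo crictl info | head -n 40; ls -d /run/runc /run/crio 2>&1"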

                                                
                                    
TestAddons/parallel/LocalPath (8.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-713277 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-713277 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-713277 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [412cc88c-6cd7-4398-af43-e39d6d34cd4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [412cc88c-6cd7-4398-af43-e39d6d34cd4a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [412cc88c-6cd7-4398-af43-e39d6d34cd4a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003293595s
addons_test.go:969: (dbg) Run:  kubectl --context addons-713277 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 ssh "cat /opt/local-path-provisioner/pvc-35088769-b195-4bfe-be10-c3ca9b48e87f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-713277 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-713277 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (241.08376ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:25.217155   21799 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:25.217460   21799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:25.217471   21799 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:25.217478   21799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:25.217700   21799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:25.217983   21799 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:25.218334   21799 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:25.218359   21799 addons.go:622] checking whether the cluster is paused
	I1210 22:28:25.218464   21799 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:25.218480   21799 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:25.218922   21799 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:25.237303   21799 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:25.237374   21799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:25.255528   21799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:25.350136   21799 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:25.350214   21799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:25.378632   21799 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:28:25.378679   21799 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:25.378686   21799 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:25.378692   21799 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:25.378696   21799 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:25.378700   21799 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:25.378703   21799 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:25.378706   21799 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:25.378709   21799 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:25.378717   21799 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:25.378723   21799 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:25.378726   21799 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:25.378729   21799 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:25.378732   21799 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:25.378735   21799 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:25.378760   21799 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:25.378772   21799 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:25.378778   21799 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:25.378783   21799 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:25.378787   21799 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:25.378793   21799 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:25.378797   21799 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:25.378802   21799 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:25.378806   21799 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:25.378814   21799 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:25.378818   21799 cri.go:89] found id: ""
	I1210 22:28:25.378865   21799 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:25.392765   21799 out.go:203] 
	W1210 22:28:25.394213   21799 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:25.394235   21799 out.go:285] * 
	* 
	W1210 22:28:25.397277   21799 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:25.399096   21799 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.11s)
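Note: every MK_ADDON_DISABLE_PAUSED failure in this report records the same sequence: inspect the node container, open an SSH session, list kube-system container IDs with `crictl ps -a --quiet`, then run `sudo runc list -f json` to see which of those containers are paused. On this crio node `/run/runc` does not exist, so the last step exits 1 and the whole disable aborts. The Go sketch below only mirrors that shape; `runSSH` is a made-up stand-in that runs the command through a local shell instead of minikube's ssh_runner, so the sketch is self-contained rather than minikube's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runSSH stands in for minikube's ssh_runner; here it just runs the command
// through a local shell so the sketch is self-contained.
func runSSH(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

// pausedContainerIDs mirrors the check logged above: list kube-system
// container IDs via crictl, then ask runc for its view of the containers.
// If the runc state directory is missing (as on this image), the runc call
// fails and the check returns an error instead of "nothing is paused",
// which is why the addon disable exits instead of proceeding.
func pausedContainerIDs() ([]string, error) {
	ids, err := runSSH(`sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`)
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	if _, err := runSSH(`sudo runc list -f json`); err != nil {
		return nil, fmt.Errorf("check paused: list paused: runc: %w", err)
	}
	return strings.Fields(ids), nil
}

func main() {
	if _, err := pausedContainerIDs(); err != nil {
		fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED:", err)
	}
}
```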

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-xz7l5" [f7961e70-c5f8-46af-9d26-18d2bafe968d] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004210444s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (263.655394ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:16.914784   20805 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:16.914943   20805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:16.914958   20805 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:16.914966   20805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:16.915224   20805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:16.915515   20805 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:16.915993   20805 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:16.916015   20805 addons.go:622] checking whether the cluster is paused
	I1210 22:28:16.916161   20805 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:16.916189   20805 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:16.916578   20805 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:16.937361   20805 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:16.937448   20805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:16.956824   20805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:17.056204   20805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:17.056277   20805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:17.090523   20805 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:17.090560   20805 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:17.090566   20805 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:17.090572   20805 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:17.090576   20805 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:17.090582   20805 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:17.090586   20805 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:17.090591   20805 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:17.090595   20805 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:17.090613   20805 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:17.090622   20805 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:17.090626   20805 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:17.090633   20805 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:17.090637   20805 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:17.090670   20805 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:17.090690   20805 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:17.090701   20805 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:17.090708   20805 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:17.090712   20805 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:17.090715   20805 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:17.090723   20805 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:17.090730   20805 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:17.090735   20805 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:17.090742   20805 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:17.090747   20805 cri.go:89] found id: ""
	I1210 22:28:17.090804   20805 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:17.105525   20805 out.go:203] 
	W1210 22:28:17.106828   20805 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:17.106856   20805 out.go:285] * 
	* 
	W1210 22:28:17.110757   20805 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:17.112300   20805 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)
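Note: the underlying error is identical in each of these blocks: `runc list -f json` reads its state directory (by default `/run/runc`) and exits non-zero when that directory is absent, which appears to be the case on this crio image. A tiny, hedged reproduction of that failure mode, assuming only that a `runc` binary is on PATH and pointing it at a root that deliberately does not exist:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Point runc at a nonexistent state directory to reproduce the
	// "open ...: no such file or directory" + exit status 1 seen above.
	cmd := exec.Command("runc", "--root", "/run/runc-does-not-exist", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s", out)
	if err != nil {
		fmt.Println("runc list failed:", err) // typically "exit status 1"
	}
}
```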

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-7ccv7" [cfb82d37-2225-4525-a1d3-45d422009f1d] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003861056s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable yakd --alsologtostderr -v=1: exit status 11 (246.10057ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:22.175521   21485 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:22.175674   21485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:22.175683   21485 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:22.175687   21485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:22.175888   21485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:22.176110   21485 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:22.176430   21485 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:22.176442   21485 addons.go:622] checking whether the cluster is paused
	I1210 22:28:22.176525   21485 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:22.176536   21485 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:22.176961   21485 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:22.195514   21485 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:22.195569   21485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:22.212972   21485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:22.309045   21485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:22.309166   21485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:22.342764   21485 cri.go:89] found id: "c2be865d75697f635e4fe6887e53a50e06d6ca46ff6d9a44248ce80faf853363"
	I1210 22:28:22.342782   21485 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:22.342786   21485 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:22.342789   21485 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:22.342792   21485 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:22.342795   21485 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:22.342798   21485 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:22.342800   21485 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:22.342864   21485 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:22.342875   21485 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:22.342878   21485 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:22.342881   21485 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:22.342884   21485 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:22.342887   21485 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:22.342890   21485 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:22.342898   21485 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:22.342904   21485 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:22.342930   21485 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:22.342939   21485 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:22.342945   21485 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:22.342950   21485 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:22.342953   21485 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:22.342955   21485 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:22.342958   21485 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:22.342961   21485 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:22.342963   21485 cri.go:89] found id: ""
	I1210 22:28:22.343007   21485 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:22.358534   21485 out.go:203] 
	W1210 22:28:22.359723   21485 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:22.359748   21485 out.go:285] * 
	* 
	W1210 22:28:22.362693   21485 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:22.363958   21485 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)
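Note: before any of these checks run on the node, the command resolves the node's SSH endpoint with `docker container inspect -f` over `.NetworkSettings.Ports`, as logged above (host port 32768 for addons-713277). Below is a minimal sketch of the same lookup via the Docker CLI; the profile name is the one from this report and `sshPort` is just an illustrative helper name:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort asks the Docker CLI which host port is published for the
// container's port 22, using the same Go template that appears in the log.
func sshPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker container inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("addons-713277") // profile name from this report
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + port)
}
```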

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-9zlkh" [56f9b548-0a4e-4da0-8a01-cb3038bb1d42] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003122323s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-713277 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-713277 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (254.497113ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:28:08.773903   19460 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:28:08.774193   19460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:08.774204   19460 out.go:374] Setting ErrFile to fd 2...
	I1210 22:28:08.774209   19460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:28:08.774425   19460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:28:08.774729   19460 mustload.go:66] Loading cluster: addons-713277
	I1210 22:28:08.775160   19460 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:08.775179   19460 addons.go:622] checking whether the cluster is paused
	I1210 22:28:08.775310   19460 config.go:182] Loaded profile config "addons-713277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:28:08.775327   19460 host.go:66] Checking if "addons-713277" exists ...
	I1210 22:28:08.775792   19460 cli_runner.go:164] Run: docker container inspect addons-713277 --format={{.State.Status}}
	I1210 22:28:08.796062   19460 ssh_runner.go:195] Run: systemctl --version
	I1210 22:28:08.796127   19460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-713277
	I1210 22:28:08.815683   19460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/addons-713277/id_rsa Username:docker}
	I1210 22:28:08.913457   19460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:28:08.913551   19460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:28:08.942617   19460 cri.go:89] found id: "4c9bba5f39f38f9ee45c8cfddcc100f4a1bb11de9bb5b350d1900ba4d7c56184"
	I1210 22:28:08.942654   19460 cri.go:89] found id: "74c081e28286c1f6c26ecc5e635be59ee827976f0b0c4dc75566010f84874c34"
	I1210 22:28:08.942660   19460 cri.go:89] found id: "48438e9e3f252d3bb1e219849f93d341691d6df8aab408f3bc2678ddf603aa30"
	I1210 22:28:08.942666   19460 cri.go:89] found id: "2163a8cf9861c3496986512282be2aa3e088474043c6ae129c2626080f521607"
	I1210 22:28:08.942672   19460 cri.go:89] found id: "2450e25bed0154dc5f1050513c113818140732c0f7e2c0bb163162334ebfdda2"
	I1210 22:28:08.942677   19460 cri.go:89] found id: "165ba560b21cee04f1995c36df46b1529b6041f332f95b6f02ebfaeebe2a0299"
	I1210 22:28:08.942682   19460 cri.go:89] found id: "28bfb1217531d8af3a90d647dc78c05584ddfbac20608a9c5c73e505b0e835a8"
	I1210 22:28:08.942687   19460 cri.go:89] found id: "8fc592d7667dfe4dd9417f007868464ed09d11577eb69cbe09242eae67af72b7"
	I1210 22:28:08.942691   19460 cri.go:89] found id: "1db25aab3edc4d40dc8c5c665a1852eec4b94568382f28fd6b1a35627508479e"
	I1210 22:28:08.942699   19460 cri.go:89] found id: "04ea9bfa0bce42a05e3464b80b1e44222eec7d7563668dcdc9b90cad26317bb6"
	I1210 22:28:08.942707   19460 cri.go:89] found id: "6ae10e6bd3d4309e8f295fa3aa734507939d23e1ec24971ebddbc7024eb426af"
	I1210 22:28:08.942712   19460 cri.go:89] found id: "979e705cc319207798b936a921be83af18d8a107d0bee76932d97163c8abbaa9"
	I1210 22:28:08.942717   19460 cri.go:89] found id: "079244ec7bd48db4d4160cc6ee0d8cf43ab4c20f3975545d819a216a417207eb"
	I1210 22:28:08.942720   19460 cri.go:89] found id: "d3ada68a097bae099ca9da2d216d84a54c153b30df414bd1bb647f57d2ae5108"
	I1210 22:28:08.942723   19460 cri.go:89] found id: "bb607f8a94b3943de0377d477fe22d9d71ff6e29b2300b7af4f512732822741c"
	I1210 22:28:08.942731   19460 cri.go:89] found id: "b4b4d4119a9e0fd207cf6f53f67d5b2c0e20850f612a398d3e4cb6e39de5b3f3"
	I1210 22:28:08.942737   19460 cri.go:89] found id: "32ba87316889a4fce52884acfa47794f66bb88f767521646dd0fe183c2208cca"
	I1210 22:28:08.942742   19460 cri.go:89] found id: "1823e7451c0fa70f394c3b82960c1b2f581f7e9f25d5211ce7d5f35f05189508"
	I1210 22:28:08.942744   19460 cri.go:89] found id: "23b97f2410dd16ec2ddabb1e963884d395fb3322a91f55f9ff1ff71590f05a36"
	I1210 22:28:08.942747   19460 cri.go:89] found id: "bef4905bf4d2818b6d5dfd4222750eceab77a231b955dfcb33e3fd90c7d5e2fc"
	I1210 22:28:08.942750   19460 cri.go:89] found id: "41f1ac5834be0fe2d29f54c187a8ecf39b0f8eb1be351817606ba91c48b76459"
	I1210 22:28:08.942752   19460 cri.go:89] found id: "a19c2cf65ed7ffc93c97dec33472e0068af0fb9bacfd9641bb69b3c9b3c8f49b"
	I1210 22:28:08.942755   19460 cri.go:89] found id: "5f60ada2aeca2ebb5cb1f8a0b7088ef6d3a19ce295472c4bda6130c4e706c2ef"
	I1210 22:28:08.942757   19460 cri.go:89] found id: "7e9f40ca0ad080db2d7805f527c659d1b887225dd4f4e807d12d5fb59d3ff326"
	I1210 22:28:08.942760   19460 cri.go:89] found id: ""
	I1210 22:28:08.942798   19460 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 22:28:08.957448   19460 out.go:203] 
	W1210 22:28:08.958731   19460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:28:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 22:28:08.958751   19460 out.go:285] * 
	* 
	W1210 22:28:08.961675   19460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 22:28:08.963219   19460 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-713277 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (2.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 image ls --format short --alsologtostderr: (2.364808917s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174200 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174200 image ls --format short --alsologtostderr:
I1210 22:36:46.830729   67621 out.go:360] Setting OutFile to fd 1 ...
I1210 22:36:46.831037   67621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:46.831044   67621 out.go:374] Setting ErrFile to fd 2...
I1210 22:36:46.831051   67621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:46.831325   67621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:36:46.832035   67621 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:46.832192   67621 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:46.832830   67621 cli_runner.go:164] Run: docker container inspect functional-174200 --format={{.State.Status}}
I1210 22:36:46.858997   67621 ssh_runner.go:195] Run: systemctl --version
I1210 22:36:46.859075   67621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-174200
I1210 22:36:46.883718   67621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-174200/id_rsa Username:docker}
I1210 22:36:46.993490   67621 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 22:36:49.025925   67621 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.032389069s)
W1210 22:36:49.026003   67621 cache_images.go:736] Failed to list images for profile functional-174200 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1210 22:36:49.023174    7240 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-10T22:36:49Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (2.37s)
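Note: the three ImageList failures are not about a missing image: `sudo crictl images --output json` completes in about 2.03s, just over crictl's default 2s RPC timeout, so the call is cancelled (DeadlineExceeded / RST_STREAM CANCEL), the command exits 1, and minikube reports an empty image list, which in turn fails the test's check for registry.k8s.io/pause. A hedged sketch of that check run directly against crictl with an explicitly larger timeout; it assumes crictl and sudo are available where it runs, and the struct models only the fields this check needs:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList models just enough of `crictl images --output json`
// for this check: every image carries a list of repo tags.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// crictl's default RPC timeout is 2s; the listings in this report took
	// ~2.03s, so raise the timeout explicitly before asking for the list.
	out, err := exec.Command("sudo", "crictl", "--timeout", "10s", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images failed:", err)
		return
	}
	var imgs imageList
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/pause") {
				fmt.Println("found", tag)
				return
			}
		}
	}
	fmt.Println("registry.k8s.io/pause not listed") // the condition the test reports
}
```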

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 image ls --format json --alsologtostderr: (2.295204042s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174200 image ls --format json --alsologtostderr:
[]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174200 image ls --format json --alsologtostderr:
I1210 22:36:49.204474   68080 out.go:360] Setting OutFile to fd 1 ...
I1210 22:36:49.204824   68080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:49.204839   68080 out.go:374] Setting ErrFile to fd 2...
I1210 22:36:49.204846   68080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:49.205133   68080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:36:49.205927   68080 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:49.206080   68080 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:49.206750   68080 cli_runner.go:164] Run: docker container inspect functional-174200 --format={{.State.Status}}
I1210 22:36:49.234489   68080 ssh_runner.go:195] Run: systemctl --version
I1210 22:36:49.234553   68080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-174200
I1210 22:36:49.265269   68080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-174200/id_rsa Username:docker}
I1210 22:36:49.375147   68080 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 22:36:51.404779   68080 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.029581612s)
W1210 22:36:51.404869   68080 cache_images.go:736] Failed to list images for profile functional-174200 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1210 22:36:51.402176    7424 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-10T22:36:51Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (2.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (2.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 image ls --format yaml --alsologtostderr: (2.281803206s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174200 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174200 image ls --format yaml --alsologtostderr:
I1210 22:36:47.722032   67932 out.go:360] Setting OutFile to fd 1 ...
I1210 22:36:47.722290   67932 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:47.722328   67932 out.go:374] Setting ErrFile to fd 2...
I1210 22:36:47.722345   67932 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:47.722615   67932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:36:47.723318   67932 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:47.723493   67932 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:47.724112   67932 cli_runner.go:164] Run: docker container inspect functional-174200 --format={{.State.Status}}
I1210 22:36:47.745940   67932 ssh_runner.go:195] Run: systemctl --version
I1210 22:36:47.746006   67932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-174200
I1210 22:36:47.769326   67932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-174200/id_rsa Username:docker}
I1210 22:36:47.878050   67932 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 22:36:49.911673   67932 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.033574144s)
W1210 22:36:49.911756   67932 cache_images.go:736] Failed to list images for profile functional-174200 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1210 22:36:49.909208    7362 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-10T22:36:49Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (2.28s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-283371 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-283371 --output=json --user=testUser: exit status 80 (1.684065367s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"62f589ab-49d7-4acf-b573-5220a6cf2487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-283371 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"33966ca8-7e2c-4861-8e5b-d4db2cb7c460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T22:46:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"52b44d62-cc67-4247-8232-3543f8b64829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-283371 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.68s)
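Note: with `--output=json`, minikube prints one CloudEvents-style JSON object per line (specversion, id, source, type, data), as in the stdout above, and a caller is expected to scan those lines and pick out the `io.k8s.sigs.minikube.error` event to learn the exit code and message. A small, self-contained sketch of such a consumer; the embedded sample is the pause output from this report with the ids and the long message abridged:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event is the subset of the CloudEvents envelope emitted by
// minikube --output=json that this sketch needs.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// Two lines from the pause run above, with ids and message shortened.
const sample = `{"specversion":"1.0","id":"1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","message":"Pausing node json-output-283371 ...","name":"Pausing","totalsteps":"1"}}
{"specversion":"1.0","id":"2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"80","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1","name":"GUEST_PAUSE"}}`

func main() {
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}
```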

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-283371 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-283371 --output=json --user=testUser: exit status 80 (1.674004417s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a59eda2-18f5-463b-a4d6-205c8b67e420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-283371 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"589093c8-51d4-4107-aaf0-bcf1be1f413a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T22:46:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"7b2875fa-f44d-469a-9d0e-baa75dbefb11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-283371 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.67s)

                                                
                                    
x
+
TestPause/serial/Pause (5.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-615194 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-615194 --alsologtostderr -v=5: exit status 80 (2.37799548s)

                                                
                                                
-- stdout --
	* Pausing node pause-615194 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:00:20.412260  207038 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:00:20.412513  207038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:00:20.412522  207038 out.go:374] Setting ErrFile to fd 2...
	I1210 23:00:20.412526  207038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:00:20.412783  207038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:00:20.413019  207038 out.go:368] Setting JSON to false
	I1210 23:00:20.413038  207038 mustload.go:66] Loading cluster: pause-615194
	I1210 23:00:20.413418  207038 config.go:182] Loaded profile config "pause-615194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:20.413828  207038 cli_runner.go:164] Run: docker container inspect pause-615194 --format={{.State.Status}}
	I1210 23:00:20.432924  207038 host.go:66] Checking if "pause-615194" exists ...
	I1210 23:00:20.433205  207038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:00:20.494439  207038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 23:00:20.48503489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:00:20.495078  207038 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-netw
ork:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:pause-615194 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(bool=false) vm-driv
er: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 23:00:20.677142  207038 out.go:179] * Pausing node pause-615194 ... 
	I1210 23:00:20.724920  207038 host.go:66] Checking if "pause-615194" exists ...
	I1210 23:00:20.725241  207038 ssh_runner.go:195] Run: systemctl --version
	I1210 23:00:20.725287  207038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-615194
	I1210 23:00:20.745007  207038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/pause-615194/id_rsa Username:docker}
	I1210 23:00:20.842534  207038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:00:20.856001  207038 pause.go:52] kubelet running: true
	I1210 23:00:20.856070  207038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:00:20.986705  207038 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:00:20.986821  207038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:00:21.054791  207038 cri.go:89] found id: "2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c"
	I1210 23:00:21.054815  207038 cri.go:89] found id: "b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d"
	I1210 23:00:21.054820  207038 cri.go:89] found id: "0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5"
	I1210 23:00:21.054823  207038 cri.go:89] found id: "7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303"
	I1210 23:00:21.054826  207038 cri.go:89] found id: "9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c"
	I1210 23:00:21.054830  207038 cri.go:89] found id: "31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398"
	I1210 23:00:21.054832  207038 cri.go:89] found id: "787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554"
	I1210 23:00:21.054835  207038 cri.go:89] found id: ""
	I1210 23:00:21.054872  207038 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:00:21.067348  207038 retry.go:31] will retry after 239.002861ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:21Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:00:21.306882  207038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:00:21.320079  207038 pause.go:52] kubelet running: false
	I1210 23:00:21.320141  207038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:00:21.433225  207038 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:00:21.433288  207038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:00:21.500676  207038 cri.go:89] found id: "2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c"
	I1210 23:00:21.500703  207038 cri.go:89] found id: "b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d"
	I1210 23:00:21.500709  207038 cri.go:89] found id: "0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5"
	I1210 23:00:21.500714  207038 cri.go:89] found id: "7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303"
	I1210 23:00:21.500717  207038 cri.go:89] found id: "9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c"
	I1210 23:00:21.500720  207038 cri.go:89] found id: "31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398"
	I1210 23:00:21.500723  207038 cri.go:89] found id: "787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554"
	I1210 23:00:21.500725  207038 cri.go:89] found id: ""
	I1210 23:00:21.500764  207038 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:00:21.513211  207038 retry.go:31] will retry after 314.305287ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:21Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:00:21.828509  207038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:00:21.843450  207038 pause.go:52] kubelet running: false
	I1210 23:00:21.843512  207038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:00:21.978291  207038 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:00:21.978372  207038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:00:22.056702  207038 cri.go:89] found id: "2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c"
	I1210 23:00:22.056730  207038 cri.go:89] found id: "b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d"
	I1210 23:00:22.056735  207038 cri.go:89] found id: "0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5"
	I1210 23:00:22.056740  207038 cri.go:89] found id: "7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303"
	I1210 23:00:22.056745  207038 cri.go:89] found id: "9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c"
	I1210 23:00:22.056750  207038 cri.go:89] found id: "31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398"
	I1210 23:00:22.056754  207038 cri.go:89] found id: "787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554"
	I1210 23:00:22.056758  207038 cri.go:89] found id: ""
	I1210 23:00:22.056822  207038 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:00:22.070067  207038 retry.go:31] will retry after 309.988666ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:22Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:00:22.380694  207038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:00:22.404191  207038 pause.go:52] kubelet running: false
	I1210 23:00:22.404265  207038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:00:22.610772  207038 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:00:22.610865  207038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:00:22.702246  207038 cri.go:89] found id: "2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c"
	I1210 23:00:22.702272  207038 cri.go:89] found id: "b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d"
	I1210 23:00:22.702278  207038 cri.go:89] found id: "0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5"
	I1210 23:00:22.702283  207038 cri.go:89] found id: "7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303"
	I1210 23:00:22.702287  207038 cri.go:89] found id: "9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c"
	I1210 23:00:22.702291  207038 cri.go:89] found id: "31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398"
	I1210 23:00:22.702305  207038 cri.go:89] found id: "787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554"
	I1210 23:00:22.702310  207038 cri.go:89] found id: ""
	I1210 23:00:22.702360  207038 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:00:22.719161  207038 out.go:203] 
	W1210 23:00:22.720745  207038 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:00:22.720775  207038 out.go:285] * 
	* 
	W1210 23:00:22.726380  207038 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:00:22.728908  207038 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-615194 --alsologtostderr -v=5" : exit status 80
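The stderr captured above points at the underlying failure: each pause attempt stops the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then runs "sudo runc list -f json" on the node, and that last step keeps failing with "open /run/runc: no such file or directory" until the retries are exhausted and minikube exits with GUEST_PAUSE (exit status 80). Below is a minimal repro sketch, not minikube code: it assumes the docker CLI is available on the test host and that the pause-615194 container from this report is still running, and it simply re-runs the failing probe and checks for the state directory the error names.

// repro_runc_list.go: hypothetical local repro of the probe that fails in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same listing the pause path performs over SSH; docker exec runs it as root in the node container.
	out, err := exec.Command("docker", "exec", "pause-615194",
		"runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("runc list: err=%v\n%s\n", err, out)

	// Check whether the runc state directory named in the error exists on the node.
	out, err = exec.Command("docker", "exec", "pause-615194",
		"ls", "-ld", "/run/runc").CombinedOutput()
	fmt.Printf("/run/runc: err=%v\n%s\n", err, out)
}

If the container has already been torn down by later tests, the same two commands can be run manually against any surviving profile from this job to confirm whether /run/runc is present there.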
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-615194
helpers_test.go:244: (dbg) docker inspect pause-615194:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788",
	        "Created": "2025-12-10T22:59:31.338358415Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T22:59:31.399378281Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/hosts",
	        "LogPath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788-json.log",
	        "Name": "/pause-615194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-615194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-615194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788",
	                "LowerDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-615194",
	                "Source": "/var/lib/docker/volumes/pause-615194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-615194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-615194",
	                "name.minikube.sigs.k8s.io": "pause-615194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a16c5a3e9abd3177fef3b2ebd898a2378e1f3635da2db42eaa1948a9c672fc10",
	            "SandboxKey": "/var/run/docker/netns/a16c5a3e9abd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-615194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4aa16abc5ae09604a9dd56dacc0bd354963dea1684cd40a2b6eac93b07601e7",
	                    "EndpointID": "f7188886908a418ef82e58a2255b4a75ea1416e8a0997fd9884eae9a6fecca08",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9a:e0:46:00:30:e5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-615194",
	                        "4e71e18d172a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
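The pause log at the top of this failure opened its SSH session on 127.0.0.1:32984, which matches the published 22/tcp HostPort in the docker inspect output above. The sketch below (assumptions: docker CLI on PATH, container name taken from this report; not minikube code) reads that value with the same inspect format string the log itself uses in its cli_runner call:

// inspect_ssh_port.go: hypothetical sketch reading the published SSH port shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Format string copied from the cli_runner invocation in the pause log.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "pause-615194").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // expect 32984 per this report
}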
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-615194 -n pause-615194
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-615194 -n pause-615194: exit status 2 (371.811635ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-615194 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-293031                                                                                                                   │ test-preload-293031         │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │ 10 Dec 25 22:57 UTC │
	│ start   │ -p scheduled-stop-230539 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │ 10 Dec 25 22:57 UTC │
	│ stop    │ -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --cancel-scheduled                                                                                              │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │ 10 Dec 25 22:57 UTC │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:58 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:58 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:58 UTC │ 10 Dec 25 22:58 UTC │
	│ delete  │ -p scheduled-stop-230539                                                                                                                 │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p insufficient-storage-351646 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-351646 │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │                     │
	│ delete  │ -p insufficient-storage-351646                                                                                                           │ insufficient-storage-351646 │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p offline-crio-615390 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-615390         │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p cert-expiration-669067 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                   │ cert-expiration-669067      │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p pause-615194 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-615194                │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p force-systemd-env-634162 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-634162    │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ delete  │ -p force-systemd-env-634162                                                                                                              │ force-systemd-env-634162    │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p missing-upgrade-628477 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-628477      │ jenkins │ v1.35.0 │ 10 Dec 25 22:59 UTC │                     │
	│ delete  │ -p offline-crio-615390                                                                                                                   │ offline-crio-615390         │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p pause-615194 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-615194                │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-000011   │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │                     │
	│ pause   │ -p pause-615194 --alsologtostderr -v=5                                                                                                   │ pause-615194                │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:00:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:00:14.468965  205784 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:00:14.469257  205784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:00:14.469301  205784 out.go:374] Setting ErrFile to fd 2...
	I1210 23:00:14.469312  205784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:00:14.469611  205784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:00:14.470191  205784 out.go:368] Setting JSON to false
	I1210 23:00:14.471354  205784 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2556,"bootTime":1765405058,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:00:14.471420  205784 start.go:143] virtualization: kvm guest
	I1210 23:00:14.473465  205784 out.go:179] * [kubernetes-upgrade-000011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:00:14.474945  205784 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:00:14.474957  205784 notify.go:221] Checking for updates...
	I1210 23:00:14.478126  205784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:00:14.479819  205784 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:00:14.484357  205784 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:00:14.485839  205784 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:00:14.487203  205784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:00:14.489196  205784 config.go:182] Loaded profile config "cert-expiration-669067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:14.489354  205784 config.go:182] Loaded profile config "missing-upgrade-628477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:00:14.489564  205784 config.go:182] Loaded profile config "pause-615194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:14.489703  205784 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:00:14.518656  205784 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:00:14.518780  205784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:00:14.586297  205784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 23:00:14.574283467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:00:14.586453  205784 docker.go:319] overlay module found
	I1210 23:00:14.589715  205784 out.go:179] * Using the docker driver based on user configuration
	I1210 23:00:14.591204  205784 start.go:309] selected driver: docker
	I1210 23:00:14.591222  205784 start.go:927] validating driver "docker" against <nil>
	I1210 23:00:14.591234  205784 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:00:14.591869  205784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:00:14.656414  205784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 23:00:14.645850905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:00:14.656587  205784 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:00:14.656874  205784 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 23:00:14.659862  205784 out.go:179] * Using Docker driver with root privileges
	I1210 23:00:14.661579  205784 cni.go:84] Creating CNI manager for ""
	I1210 23:00:14.661665  205784 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:00:14.661681  205784 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:00:14.661773  205784 start.go:353] cluster config:
	{Name:kubernetes-upgrade-000011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-000011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:00:14.667236  205784 out.go:179] * Starting "kubernetes-upgrade-000011" primary control-plane node in "kubernetes-upgrade-000011" cluster
	I1210 23:00:14.668929  205784 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:00:14.670665  205784 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:00:14.341991  203885 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:00:14.342316  203885 start.go:159] libmachine.API.Create for "missing-upgrade-628477" (driver="docker")
	I1210 23:00:14.342354  203885 client.go:168] LocalClient.Create starting
	I1210 23:00:14.342426  203885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:00:14.342467  203885 main.go:141] libmachine: Decoding PEM data...
	I1210 23:00:14.342482  203885 main.go:141] libmachine: Parsing certificate...
	I1210 23:00:14.342573  203885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:00:14.342601  203885 main.go:141] libmachine: Decoding PEM data...
	I1210 23:00:14.342612  203885 main.go:141] libmachine: Parsing certificate...
	I1210 23:00:14.343086  203885 cli_runner.go:164] Run: docker network inspect missing-upgrade-628477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:00:14.363272  203885 cli_runner.go:211] docker network inspect missing-upgrade-628477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:00:14.363343  203885 network_create.go:284] running [docker network inspect missing-upgrade-628477] to gather additional debugging logs...
	I1210 23:00:14.363362  203885 cli_runner.go:164] Run: docker network inspect missing-upgrade-628477
	W1210 23:00:14.384344  203885 cli_runner.go:211] docker network inspect missing-upgrade-628477 returned with exit code 1
	I1210 23:00:14.384364  203885 network_create.go:287] error running [docker network inspect missing-upgrade-628477]: docker network inspect missing-upgrade-628477: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-628477 not found
	I1210 23:00:14.384383  203885 network_create.go:289] output of [docker network inspect missing-upgrade-628477]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-628477 not found
	
	** /stderr **
	I1210 23:00:14.384474  203885 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:00:14.402519  203885 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:00:14.403198  203885 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:00:14.403846  203885 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:00:14.404580  203885 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c4aa16abc5ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:5d:47:86:ab:fe} reservation:<nil>}
	I1210 23:00:14.405421  203885 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-005ef8a21e87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:46:0a:99:96:13} reservation:<nil>}
	I1210 23:00:14.406216  203885 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b82af0}
	I1210 23:00:14.406236  203885 network_create.go:124] attempt to create docker network missing-upgrade-628477 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:00:14.406290  203885 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-628477 missing-upgrade-628477
	I1210 23:00:14.464564  203885 network_create.go:108] docker network missing-upgrade-628477 192.168.94.0/24 created
	I1210 23:00:14.464597  203885 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-628477" container
	I1210 23:00:14.464705  203885 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:00:14.487788  203885 cli_runner.go:164] Run: docker volume create missing-upgrade-628477 --label name.minikube.sigs.k8s.io=missing-upgrade-628477 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:00:14.510617  203885 oci.go:103] Successfully created a docker volume missing-upgrade-628477
	I1210 23:00:14.510745  203885 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-628477-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-628477 --entrypoint /usr/bin/test -v missing-upgrade-628477:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1210 23:00:14.672148  205784 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 23:00:14.672206  205784 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 23:00:14.672240  205784 cache.go:65] Caching tarball of preloaded images
	I1210 23:00:14.672340  205784 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:00:14.672400  205784 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:00:14.672414  205784 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 23:00:14.672564  205784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/config.json ...
	I1210 23:00:14.672598  205784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/config.json: {Name:mkb3fff544a2a99b0b6e4089d74f800ea6496126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:14.701009  205784 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:00:14.701032  205784 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:00:14.701052  205784 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:00:14.701092  205784 start.go:360] acquireMachinesLock for kubernetes-upgrade-000011: {Name:mk3bb2603eb5718897233c9748a9e145f39c334c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:00:14.701214  205784 start.go:364] duration metric: took 96.008µs to acquireMachinesLock for "kubernetes-upgrade-000011"
	I1210 23:00:14.701244  205784 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-000011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-000011 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:00:14.701309  205784 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:00:14.426119  205002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:00:14.433081  205002 fix.go:56] duration metric: took 3.738335473s for fixHost
	I1210 23:00:14.433116  205002 start.go:83] releasing machines lock for "pause-615194", held for 3.738396397s
	I1210 23:00:14.433192  205002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-615194
	I1210 23:00:14.455917  205002 ssh_runner.go:195] Run: cat /version.json
	I1210 23:00:14.455971  205002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-615194
	I1210 23:00:14.456002  205002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:00:14.456066  205002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-615194
	I1210 23:00:14.479255  205002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/pause-615194/id_rsa Username:docker}
	I1210 23:00:14.479382  205002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/pause-615194/id_rsa Username:docker}
	I1210 23:00:14.580694  205002 ssh_runner.go:195] Run: systemctl --version
	I1210 23:00:14.653903  205002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:00:14.699957  205002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:00:14.705590  205002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:00:14.705672  205002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:00:14.714972  205002 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 23:00:14.714998  205002 start.go:496] detecting cgroup driver to use...
	I1210 23:00:14.715030  205002 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:00:14.715214  205002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:00:14.736192  205002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:00:14.751875  205002 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:00:14.751934  205002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:00:14.770276  205002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:00:14.786506  205002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:00:14.938913  205002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:00:15.073928  205002 docker.go:234] disabling docker service ...
	I1210 23:00:15.073992  205002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:00:15.093963  205002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:00:15.111358  205002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:00:15.257983  205002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:00:15.415895  205002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:00:15.430344  205002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:00:15.445984  205002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:00:15.446103  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.457317  205002 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:00:15.457399  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.468636  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.479075  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.488971  205002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:00:15.499797  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.526465  205002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.536459  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.584108  205002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:00:15.592492  205002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:00:15.600588  205002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:00:15.718891  205002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:00:16.297807  205002 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:00:16.297876  205002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:00:16.304212  205002 start.go:564] Will wait 60s for crictl version
	I1210 23:00:16.304276  205002 ssh_runner.go:195] Run: which crictl
	I1210 23:00:16.308475  205002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:00:16.338867  205002 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:00:16.338955  205002 ssh_runner.go:195] Run: crio --version
	I1210 23:00:16.377106  205002 ssh_runner.go:195] Run: crio --version
	I1210 23:00:16.412206  205002 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:00:16.413603  205002 cli_runner.go:164] Run: docker network inspect pause-615194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:00:16.437447  205002 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:00:16.442056  205002 kubeadm.go:884] updating cluster {Name:pause-615194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-615194 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false
nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:00:16.442272  205002 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:00:16.442342  205002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:00:16.478038  205002 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:00:16.478062  205002 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:00:16.478106  205002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:00:16.506778  205002 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:00:16.506806  205002 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:00:16.506815  205002 kubeadm.go:935] updating node { 192.168.76.2  8443 v1.34.2 crio true true} ...
	I1210 23:00:16.506951  205002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-615194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-615194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
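The [Service] override above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 362-byte scp a few lines below, and it takes effect via the daemon-reload/start kubelet steps that follow. A hedged sketch of how to inspect and apply such a drop-in on the node:

	sudo systemctl cat kubelet      # shows kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet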
	I1210 23:00:16.507047  205002 ssh_runner.go:195] Run: crio config
	I1210 23:00:16.564384  205002 cni.go:84] Creating CNI manager for ""
	I1210 23:00:16.564430  205002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:00:16.564445  205002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:00:16.564484  205002 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-615194 NodeName:pause-615194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:00:16.564673  205002 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-615194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
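The generated config above is copied to /var/tmp/minikube/kubeadm.yaml.new below and only acted on if it differs from the file already on the node (see the diff further down). As an illustrative aside, and assuming the kubeadm binary staged on the node supports the subcommand, such a file can be sanity-checked without touching the cluster:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new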
	
	I1210 23:00:16.564761  205002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:00:16.576756  205002 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:00:16.576826  205002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:00:16.586743  205002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1210 23:00:16.600720  205002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:00:16.616876  205002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1210 23:00:16.631468  205002 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:00:16.636753  205002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:00:16.788605  205002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:00:16.803112  205002 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194 for IP: 192.168.76.2
	I1210 23:00:16.803136  205002 certs.go:195] generating shared ca certs ...
	I1210 23:00:16.803155  205002 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:16.803369  205002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:00:16.803439  205002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:00:16.803455  205002 certs.go:257] generating profile certs ...
	I1210 23:00:16.803565  205002 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.key
	I1210 23:00:16.803679  205002 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/apiserver.key.09d18f28
	I1210 23:00:16.803741  205002 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/proxy-client.key
	I1210 23:00:16.803917  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:00:16.803970  205002 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:00:16.803984  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:00:16.804023  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:00:16.804070  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:00:16.804105  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:00:16.804172  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:00:16.804903  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:00:16.827281  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:00:16.851001  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:00:16.870582  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:00:16.890187  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 23:00:16.911067  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:00:16.933296  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:00:16.954294  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:00:16.974519  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:00:16.997775  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:00:17.020874  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:00:17.042624  205002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:00:17.057836  205002 ssh_runner.go:195] Run: openssl version
	I1210 23:00:17.064957  205002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.073541  205002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:00:17.082090  205002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.086633  205002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.086701  205002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.129383  205002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:00:17.137983  205002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.146803  205002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:00:17.156224  205002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.160870  205002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.160941  205002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.196400  205002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:00:17.207006  205002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.215447  205002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:00:17.226100  205002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.231594  205002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.231672  205002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.268681  205002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
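Each of the three blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, symlink it under /etc/ssl/certs, and verify that a link named after the certificate's subject hash resolves (b5213941.0, 51391683.0, 3ec20f2e.0), since that hash-named link is what OpenSSL's CA lookup uses. A hedged sketch of the convention:

	# The link name is the subject hash printed by `openssl x509 -hash`, plus ".0":
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo test -L "/etc/ssl/certs/${h}.0" && echo "hash link present" \
	  || sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"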
	I1210 23:00:17.277628  205002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:00:17.282194  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:00:17.332885  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:00:17.371394  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:00:17.418771  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:00:17.468674  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:00:17.515901  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
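The `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit would trigger regeneration. Illustrative usage:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or is unreadable)"
	fi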
	I1210 23:00:17.566608  205002 kubeadm.go:401] StartCluster: {Name:pause-615194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-615194 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvi
dia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:00:17.566775  205002 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:00:17.566835  205002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:00:17.607295  205002 cri.go:89] found id: "2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c"
	I1210 23:00:17.607318  205002 cri.go:89] found id: "b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d"
	I1210 23:00:17.607324  205002 cri.go:89] found id: "0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5"
	I1210 23:00:17.607329  205002 cri.go:89] found id: "7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303"
	I1210 23:00:17.607333  205002 cri.go:89] found id: "9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c"
	I1210 23:00:17.607337  205002 cri.go:89] found id: "31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398"
	I1210 23:00:17.607339  205002 cri.go:89] found id: "787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554"
	I1210 23:00:17.607342  205002 cri.go:89] found id: ""
	I1210 23:00:17.607388  205002 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 23:00:17.623792  205002 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:17Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:00:17.623868  205002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:00:17.634295  205002 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:00:17.634316  205002 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:00:17.634365  205002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:00:17.644456  205002 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:00:17.645122  205002 kubeconfig.go:125] found "pause-615194" server: "https://192.168.76.2:8443"
	I1210 23:00:17.645931  205002 kapi.go:59] client config for pause-615194: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:00:17.646391  205002 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 23:00:17.646417  205002 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 23:00:17.646421  205002 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 23:00:17.646428  205002 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 23:00:17.646432  205002 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 23:00:17.646848  205002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:00:17.656027  205002 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 23:00:17.656069  205002 kubeadm.go:602] duration metric: took 21.746108ms to restartPrimaryControlPlane
	I1210 23:00:17.656080  205002 kubeadm.go:403] duration metric: took 89.487676ms to StartCluster
	I1210 23:00:17.656100  205002 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:17.656189  205002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:00:17.657018  205002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:17.680354  205002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:00:17.680422  205002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:00:17.680677  205002 config.go:182] Loaded profile config "pause-615194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:17.796851  205002 out.go:179] * Verifying Kubernetes components...
	I1210 23:00:17.796888  205002 out.go:179] * Enabled addons: 
	I1210 23:00:17.861944  205002 addons.go:530] duration metric: took 181.497537ms for enable addons: enabled=[]
	I1210 23:00:17.861981  205002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:00:17.986945  205002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:00:18.000655  205002 node_ready.go:35] waiting up to 6m0s for node "pause-615194" to be "Ready" ...
	I1210 23:00:18.009739  205002 node_ready.go:49] node "pause-615194" is "Ready"
	I1210 23:00:18.009770  205002 node_ready.go:38] duration metric: took 9.076812ms for node "pause-615194" to be "Ready" ...
	I1210 23:00:18.009787  205002 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:00:18.009843  205002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:00:18.022620  205002 api_server.go:72] duration metric: took 342.20322ms to wait for apiserver process to appear ...
	I1210 23:00:18.022660  205002 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:00:18.022685  205002 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:00:18.027045  205002 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
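The healthz probe above is an ordinary HTTPS GET against the apiserver. A roughly equivalent manual check (illustrative; /healthz is readable without client credentials under default RBAC, so certificate verification is simply skipped here):

	curl -k https://192.168.76.2:8443/healthz    # expected body: ok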
	I1210 23:00:18.028046  205002 api_server.go:141] control plane version: v1.34.2
	I1210 23:00:18.028074  205002 api_server.go:131] duration metric: took 5.405155ms to wait for apiserver health ...
	I1210 23:00:18.028085  205002 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:00:18.031530  205002 system_pods.go:59] 7 kube-system pods found
	I1210 23:00:18.031562  205002 system_pods.go:61] "coredns-66bc5c9577-rcw4l" [152633ef-75ee-401c-8f62-68ecef534501] Running
	I1210 23:00:18.031570  205002 system_pods.go:61] "etcd-pause-615194" [87151b12-02ab-48f1-9d23-c4e7cc1c86e3] Running
	I1210 23:00:18.031576  205002 system_pods.go:61] "kindnet-7s4fz" [8a7bab97-f5f0-4c87-9b64-567b5c26a5e6] Running
	I1210 23:00:18.031582  205002 system_pods.go:61] "kube-apiserver-pause-615194" [8e329430-7379-4b0f-8ab3-4fb88f1e0f77] Running
	I1210 23:00:18.031589  205002 system_pods.go:61] "kube-controller-manager-pause-615194" [1457b77d-25a2-4901-ae8b-ad4e52efce55] Running
	I1210 23:00:18.031596  205002 system_pods.go:61] "kube-proxy-gg5fh" [afaeee49-2e88-47ff-b34d-e636614ad430] Running
	I1210 23:00:18.031602  205002 system_pods.go:61] "kube-scheduler-pause-615194" [8319e2d9-3d29-468b-ab91-a55655a4d6e9] Running
	I1210 23:00:18.031611  205002 system_pods.go:74] duration metric: took 3.518449ms to wait for pod list to return data ...
	I1210 23:00:18.031625  205002 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:00:18.033859  205002 default_sa.go:45] found service account: "default"
	I1210 23:00:18.033884  205002 default_sa.go:55] duration metric: took 2.251079ms for default service account to be created ...
	I1210 23:00:18.033894  205002 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:00:18.036657  205002 system_pods.go:86] 7 kube-system pods found
	I1210 23:00:18.036684  205002 system_pods.go:89] "coredns-66bc5c9577-rcw4l" [152633ef-75ee-401c-8f62-68ecef534501] Running
	I1210 23:00:18.036692  205002 system_pods.go:89] "etcd-pause-615194" [87151b12-02ab-48f1-9d23-c4e7cc1c86e3] Running
	I1210 23:00:18.036698  205002 system_pods.go:89] "kindnet-7s4fz" [8a7bab97-f5f0-4c87-9b64-567b5c26a5e6] Running
	I1210 23:00:18.036703  205002 system_pods.go:89] "kube-apiserver-pause-615194" [8e329430-7379-4b0f-8ab3-4fb88f1e0f77] Running
	I1210 23:00:18.036708  205002 system_pods.go:89] "kube-controller-manager-pause-615194" [1457b77d-25a2-4901-ae8b-ad4e52efce55] Running
	I1210 23:00:18.036714  205002 system_pods.go:89] "kube-proxy-gg5fh" [afaeee49-2e88-47ff-b34d-e636614ad430] Running
	I1210 23:00:18.036719  205002 system_pods.go:89] "kube-scheduler-pause-615194" [8319e2d9-3d29-468b-ab91-a55655a4d6e9] Running
	I1210 23:00:18.036729  205002 system_pods.go:126] duration metric: took 2.827988ms to wait for k8s-apps to be running ...
	I1210 23:00:18.036742  205002 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:00:18.036792  205002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:00:18.050630  205002 system_svc.go:56] duration metric: took 13.882227ms WaitForService to wait for kubelet
	I1210 23:00:18.050685  205002 kubeadm.go:587] duration metric: took 370.275106ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:00:18.050710  205002 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:00:18.053331  205002 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:00:18.053361  205002 node_conditions.go:123] node cpu capacity is 8
	I1210 23:00:18.053382  205002 node_conditions.go:105] duration metric: took 2.664842ms to run NodePressure ...
	I1210 23:00:18.053397  205002 start.go:242] waiting for startup goroutines ...
	I1210 23:00:18.053412  205002 start.go:247] waiting for cluster config update ...
	I1210 23:00:18.053426  205002 start.go:256] writing updated cluster config ...
	I1210 23:00:18.107539  205002 ssh_runner.go:195] Run: rm -f paused
	I1210 23:00:18.113252  205002 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:00:18.114118  205002 kapi.go:59] client config for pause-615194: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:00:18.117352  205002 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcw4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.122188  205002 pod_ready.go:94] pod "coredns-66bc5c9577-rcw4l" is "Ready"
	I1210 23:00:18.122210  205002 pod_ready.go:86] duration metric: took 4.838463ms for pod "coredns-66bc5c9577-rcw4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.124074  205002 pod_ready.go:83] waiting for pod "etcd-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.128723  205002 pod_ready.go:94] pod "etcd-pause-615194" is "Ready"
	I1210 23:00:18.128751  205002 pod_ready.go:86] duration metric: took 4.645949ms for pod "etcd-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.130951  205002 pod_ready.go:83] waiting for pod "kube-apiserver-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.135291  205002 pod_ready.go:94] pod "kube-apiserver-pause-615194" is "Ready"
	I1210 23:00:18.135313  205002 pod_ready.go:86] duration metric: took 4.338978ms for pod "kube-apiserver-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.137442  205002 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.518400  205002 pod_ready.go:94] pod "kube-controller-manager-pause-615194" is "Ready"
	I1210 23:00:18.518429  205002 pod_ready.go:86] duration metric: took 380.961519ms for pod "kube-controller-manager-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.718028  205002 pod_ready.go:83] waiting for pod "kube-proxy-gg5fh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:19.118420  205002 pod_ready.go:94] pod "kube-proxy-gg5fh" is "Ready"
	I1210 23:00:19.118447  205002 pod_ready.go:86] duration metric: took 400.388437ms for pod "kube-proxy-gg5fh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:19.317558  205002 pod_ready.go:83] waiting for pod "kube-scheduler-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:14.703110  205784 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:00:14.703349  205784 start.go:159] libmachine.API.Create for "kubernetes-upgrade-000011" (driver="docker")
	I1210 23:00:14.703385  205784 client.go:173] LocalClient.Create starting
	I1210 23:00:14.703467  205784 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:00:14.703499  205784 main.go:143] libmachine: Decoding PEM data...
	I1210 23:00:14.703517  205784 main.go:143] libmachine: Parsing certificate...
	I1210 23:00:14.703577  205784 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:00:14.703595  205784 main.go:143] libmachine: Decoding PEM data...
	I1210 23:00:14.703607  205784 main.go:143] libmachine: Parsing certificate...
	I1210 23:00:14.703952  205784 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-000011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:00:14.725328  205784 cli_runner.go:211] docker network inspect kubernetes-upgrade-000011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:00:14.725414  205784 network_create.go:284] running [docker network inspect kubernetes-upgrade-000011] to gather additional debugging logs...
	I1210 23:00:14.725439  205784 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-000011
	W1210 23:00:14.747928  205784 cli_runner.go:211] docker network inspect kubernetes-upgrade-000011 returned with exit code 1
	I1210 23:00:14.747955  205784 network_create.go:287] error running [docker network inspect kubernetes-upgrade-000011]: docker network inspect kubernetes-upgrade-000011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-000011 not found
	I1210 23:00:14.747967  205784 network_create.go:289] output of [docker network inspect kubernetes-upgrade-000011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-000011 not found
	
	** /stderr **
	I1210 23:00:14.748091  205784 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:00:14.767136  205784 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:00:14.767788  205784 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:00:14.768473  205784 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:00:14.769166  205784 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c4aa16abc5ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:5d:47:86:ab:fe} reservation:<nil>}
	I1210 23:00:14.770032  205784 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-005ef8a21e87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:46:0a:99:96:13} reservation:<nil>}
	I1210 23:00:14.770888  205784 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-59c76f53fda0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:1a:60:ec:7a:83:f9} reservation:<nil>}
	I1210 23:00:14.771735  205784 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb79d0}
	I1210 23:00:14.771761  205784 network_create.go:124] attempt to create docker network kubernetes-upgrade-000011 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 23:00:14.771818  205784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 kubernetes-upgrade-000011
	I1210 23:00:14.843528  205784 network_create.go:108] docker network kubernetes-upgrade-000011 192.168.103.0/24 created
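After the free-subnet scan above settles on 192.168.103.0/24, the network is created with the `docker network create` invocation shown. An illustrative way to confirm what was actually allocated:

	docker network inspect kubernetes-upgrade-000011 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.103.0/24 gw 192.168.103.1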
	I1210 23:00:14.843564  205784 kic.go:121] calculated static IP "192.168.103.2" for the "kubernetes-upgrade-000011" container
	I1210 23:00:14.843664  205784 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:00:14.866695  205784 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-000011 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:00:14.890263  205784 oci.go:103] Successfully created a docker volume kubernetes-upgrade-000011
	I1210 23:00:14.890348  205784 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-000011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 --entrypoint /usr/bin/test -v kubernetes-upgrade-000011:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:00:15.474079  205784 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-000011
	I1210 23:00:15.474161  205784 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 23:00:15.474177  205784 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:00:15.474275  205784 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-000011:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:00:15.008797  203885 oci.go:107] Successfully prepared a docker volume missing-upgrade-628477
	I1210 23:00:15.008826  203885 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1210 23:00:15.008865  203885 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:00:15.008941  203885 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-628477:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:00:19.887866  205002 pod_ready.go:94] pod "kube-scheduler-pause-615194" is "Ready"
	I1210 23:00:19.887892  205002 pod_ready.go:86] duration metric: took 570.307918ms for pod "kube-scheduler-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:19.887902  205002 pod_ready.go:40] duration metric: took 1.774609971s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:00:19.933033  205002 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:00:20.064405  205002 out.go:179] * Done! kubectl is now configured to use "pause-615194" cluster and "default" namespace by default
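The `minikube start` log for pause-615194 ends here; the sections that follow (==> CRI-O <==, ==> container status <==, ==> coredns [...] <==, ==> describe nodes <==) are the diagnostics the test harness collects after the failure, in the same layout `minikube logs` produces. A roughly equivalent manual capture (illustrative, not part of this run):

	minikube -p pause-615194 logs --file pause-615194-diag.log
	minikube -p pause-615194 ssh -- sudo crictl ps -a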
	
	
	==> CRI-O <==
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.191090169Z" level=info msg="RDT not available in the host system"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.191109448Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192085342Z" level=info msg="Conmon does support the --sync option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.1921055Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192118953Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192939359Z" level=info msg="Conmon does support the --sync option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192955606Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.197780195Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.197821637Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.198892267Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.199712104Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.199861363Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.290326715Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-rcw4l Namespace:kube-system ID:aad3bb39104ed6e8f68b930015274b2444793e44694f9348b72449446f9d65cf UID:152633ef-75ee-401c-8f62-68ecef534501 NetNS:/var/run/netns/06562862-1a96-4bcd-943c-86b807c37667 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132c58}] Aliases:map[]}"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.290550633Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-rcw4l for CNI network kindnet (type=ptp)"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291001633Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291030348Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291083824Z" level=info msg="Create NRI interface"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291193416Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291206806Z" level=info msg="runtime interface created"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291219376Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291299887Z" level=info msg="runtime interface starting up..."
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291308853Z" level=info msg="starting plugins..."
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291325983Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.29174152Z" level=info msg="No systemd watchdog enabled"
	Dec 10 23:00:16 pause-615194 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2f0a8ad412f18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   aad3bb39104ed       coredns-66bc5c9577-rcw4l               kube-system
	b8cf198856b60       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   8c77ec48e3eb2       kindnet-7s4fz                          kube-system
	0a6eec3686ed6       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   27 seconds ago      Running             kube-proxy                0                   84953f47f58a7       kube-proxy-gg5fh                       kube-system
	7da3e72ac7034       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   38 seconds ago      Running             etcd                      0                   5587595e4a72d       etcd-pause-615194                      kube-system
	9ce1af4a96891       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   38 seconds ago      Running             kube-apiserver            0                   1b27d30bc6abe       kube-apiserver-pause-615194            kube-system
	31472513a1dc5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   38 seconds ago      Running             kube-controller-manager   0                   69183bb1c0dbf       kube-controller-manager-pause-615194   kube-system
	787e9326334ee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   38 seconds ago      Running             kube-scheduler            0                   e7e1b446627ee       kube-scheduler-pause-615194            kube-system
	
	
	==> coredns [2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44699 - 62466 "HINFO IN 8549627779733484000.7721476600492274986. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022929763s
	
	
	==> describe nodes <==
	Name:               pause-615194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-615194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=pause-615194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T22_59_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 22:59:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-615194
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:00:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 22:59:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 22:59:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 22:59:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 23:00:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-615194
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                a7b3df45-5a23-4710-b912-c560621ca83d
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rcw4l                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-615194                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-7s4fz                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-615194             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-615194    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-gg5fh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-615194             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node pause-615194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node pause-615194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node pause-615194 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node pause-615194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node pause-615194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node pause-615194 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node pause-615194 event: Registered Node pause-615194 in Controller
	  Normal  NodeReady                17s                kubelet          Node pause-615194 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303] <==
	{"level":"warn","ts":"2025-12-10T22:59:46.965828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:46.974726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:46.983546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:46.994829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.009055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.018417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.026766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.036305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.044875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.054549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.068154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.081508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.098023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.132864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.142703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.151593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.161096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.171468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.180326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.194687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.201561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.209798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.269008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:00:19.886539Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.93638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-615194\" limit:1 ","response":"range_response_count:1 size:5412"}
	{"level":"info","ts":"2025-12-10T23:00:19.886685Z","caller":"traceutil/trace.go:172","msg":"trace[881253544] range","detail":"{range_begin:/registry/minions/pause-615194; range_end:; response_count:1; response_revision:442; }","duration":"170.063265ms","start":"2025-12-10T23:00:19.716573Z","end":"2025-12-10T23:00:19.886637Z","steps":["trace[881253544] 'range keys from in-memory index tree'  (duration: 169.799613ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:00:23 up 42 min,  0 user,  load average: 4.74, 1.94, 1.29
	Linux pause-615194 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d] <==
	I1210 22:59:56.511055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 22:59:56.511374       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 22:59:56.511539       1 main.go:148] setting mtu 1500 for CNI 
	I1210 22:59:56.511555       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 22:59:56.511577       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T22:59:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 22:59:56.728803       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 22:59:56.728962       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 22:59:56.728988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 22:59:56.729133       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 22:59:57.146063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 22:59:57.225938       1 metrics.go:72] Registering metrics
	I1210 22:59:57.226058       1 controller.go:711] "Syncing nftables rules"
	I1210 23:00:06.729472       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:00:06.729568       1 main.go:301] handling current node
	I1210 23:00:16.736734       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:00:16.736765       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c] <==
	I1210 22:59:47.886516       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1210 22:59:47.887007       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 22:59:47.891270       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:47.891574       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 22:59:47.898475       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:47.898802       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 22:59:47.905471       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 22:59:48.072994       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 22:59:48.784103       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 22:59:48.795562       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 22:59:48.795683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 22:59:49.371403       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 22:59:49.414458       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 22:59:49.486478       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 22:59:49.493202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 22:59:49.494331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 22:59:49.499517       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 22:59:49.811873       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 22:59:50.288851       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 22:59:50.310376       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 22:59:50.326427       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 22:59:55.511877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 22:59:55.565938       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:55.569284       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:55.813464       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398] <==
	I1210 22:59:54.902389       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 22:59:54.907052       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 22:59:54.907368       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 22:59:54.907464       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 22:59:54.907580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 22:59:54.908004       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 22:59:54.908043       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 22:59:54.908043       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 22:59:54.907980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 22:59:54.908366       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 22:59:54.909508       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 22:59:54.909576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 22:59:54.912773       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 22:59:54.913942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 22:59:54.914265       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 22:59:54.914306       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 22:59:54.914314       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 22:59:54.914322       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 22:59:54.918434       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 22:59:54.920867       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 22:59:54.920872       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 22:59:54.929159       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 22:59:54.936865       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-615194" podCIDRs=["10.244.0.0/24"]
	I1210 22:59:54.938952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:00:09.859527       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5] <==
	I1210 22:59:56.262473       1 server_linux.go:53] "Using iptables proxy"
	I1210 22:59:56.347135       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 22:59:56.447684       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 22:59:56.447731       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 22:59:56.447811       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:59:56.473081       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 22:59:56.473146       1 server_linux.go:132] "Using iptables Proxier"
	I1210 22:59:56.479270       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:59:56.479696       1 server.go:527] "Version info" version="v1.34.2"
	I1210 22:59:56.479725       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:59:56.482689       1 config.go:200] "Starting service config controller"
	I1210 22:59:56.482720       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:59:56.482812       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:59:56.482829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:59:56.483027       1 config.go:309] "Starting node config controller"
	I1210 22:59:56.483040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:59:56.483047       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 22:59:56.483179       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:59:56.483204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:59:56.583417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 22:59:56.583420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 22:59:56.587278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554] <==
	E1210 22:59:47.842329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:59:47.842490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 22:59:47.842515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 22:59:47.842544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:59:47.842556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:59:47.842587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 22:59:47.842585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:59:47.842602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:59:47.842622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:59:47.842816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:59:47.842827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 22:59:48.648035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 22:59:48.658273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:59:48.676682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:59:48.798591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:59:48.805795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 22:59:48.818378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:59:49.017469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 22:59:49.031778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 22:59:49.032006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:59:49.103039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 22:59:49.166629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:59:49.185873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:59:49.296603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 22:59:51.140061       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 22:59:56 pause-615194 kubelet[1300]: I1210 22:59:56.340732    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gg5fh" podStartSLOduration=1.340705025 podStartE2EDuration="1.340705025s" podCreationTimestamp="2025-12-10 22:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 22:59:56.340596733 +0000 UTC m=+6.248705065" watchObservedRunningTime="2025-12-10 22:59:56.340705025 +0000 UTC m=+6.248813374"
	Dec 10 22:59:56 pause-615194 kubelet[1300]: I1210 22:59:56.355953    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7s4fz" podStartSLOduration=1.355925814 podStartE2EDuration="1.355925814s" podCreationTimestamp="2025-12-10 22:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 22:59:56.35511557 +0000 UTC m=+6.263223901" watchObservedRunningTime="2025-12-10 22:59:56.355925814 +0000 UTC m=+6.264034147"
	Dec 10 23:00:06 pause-615194 kubelet[1300]: I1210 23:00:06.796402    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 23:00:06 pause-615194 kubelet[1300]: I1210 23:00:06.857457    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzslx\" (UniqueName: \"kubernetes.io/projected/152633ef-75ee-401c-8f62-68ecef534501-kube-api-access-jzslx\") pod \"coredns-66bc5c9577-rcw4l\" (UID: \"152633ef-75ee-401c-8f62-68ecef534501\") " pod="kube-system/coredns-66bc5c9577-rcw4l"
	Dec 10 23:00:06 pause-615194 kubelet[1300]: I1210 23:00:06.857529    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152633ef-75ee-401c-8f62-68ecef534501-config-volume\") pod \"coredns-66bc5c9577-rcw4l\" (UID: \"152633ef-75ee-401c-8f62-68ecef534501\") " pod="kube-system/coredns-66bc5c9577-rcw4l"
	Dec 10 23:00:07 pause-615194 kubelet[1300]: I1210 23:00:07.366055    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rcw4l" podStartSLOduration=11.366021218 podStartE2EDuration="11.366021218s" podCreationTimestamp="2025-12-10 22:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:00:07.365770895 +0000 UTC m=+17.273879258" watchObservedRunningTime="2025-12-10 23:00:07.366021218 +0000 UTC m=+17.274129549"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.294273    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294384    1300 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294465    1300 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294484    1300 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294498    1300 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.360954    1300 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.361024    1300 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.361041    1300 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.394579    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.525253    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.771473    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:13 pause-615194 kubelet[1300]: W1210 23:00:13.261746    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:13 pause-615194 kubelet[1300]: E1210 23:00:13.361920    1300 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 23:00:13 pause-615194 kubelet[1300]: E1210 23:00:13.361986    1300 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:13 pause-615194 kubelet[1300]: E1210 23:00:13.362002    1300 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:20 pause-615194 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:00:20 pause-615194 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:00:20 pause-615194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:00:20 pause-615194 systemd[1]: kubelet.service: Consumed 1.427s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-615194 -n pause-615194
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-615194 -n pause-615194: exit status 2 (338.598957ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-615194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-615194
helpers_test.go:244: (dbg) docker inspect pause-615194:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788",
	        "Created": "2025-12-10T22:59:31.338358415Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T22:59:31.399378281Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/hosts",
	        "LogPath": "/var/lib/docker/containers/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788/4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788-json.log",
	        "Name": "/pause-615194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-615194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-615194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e71e18d172ae2d74afb9b08510d5682af1b626c943dca60b2aef7e3bdbe4788",
	                "LowerDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/204efe3d9c9893ef33e3afce315fccc5cdd63899e655fbd6e01117756a30de76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-615194",
	                "Source": "/var/lib/docker/volumes/pause-615194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-615194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-615194",
	                "name.minikube.sigs.k8s.io": "pause-615194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a16c5a3e9abd3177fef3b2ebd898a2378e1f3635da2db42eaa1948a9c672fc10",
	            "SandboxKey": "/var/run/docker/netns/a16c5a3e9abd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-615194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4aa16abc5ae09604a9dd56dacc0bd354963dea1684cd40a2b6eac93b07601e7",
	                    "EndpointID": "f7188886908a418ef82e58a2255b4a75ea1416e8a0997fd9884eae9a6fecca08",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9a:e0:46:00:30:e5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-615194",
	                        "4e71e18d172a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-615194 -n pause-615194
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-615194 -n pause-615194: exit status 2 (339.409773ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-615194 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-615194 logs -n 25: (1.005482326s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-293031                                                                                                                   │ test-preload-293031         │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │ 10 Dec 25 22:57 UTC │
	│ start   │ -p scheduled-stop-230539 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │ 10 Dec 25 22:57 UTC │
	│ stop    │ -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --cancel-scheduled                                                                                              │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:57 UTC │ 10 Dec 25 22:57 UTC │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:58 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:58 UTC │                     │
	│ stop    │ -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:58 UTC │ 10 Dec 25 22:58 UTC │
	│ delete  │ -p scheduled-stop-230539                                                                                                                 │ scheduled-stop-230539       │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p insufficient-storage-351646 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-351646 │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │                     │
	│ delete  │ -p insufficient-storage-351646                                                                                                           │ insufficient-storage-351646 │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p offline-crio-615390 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-615390         │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p cert-expiration-669067 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                   │ cert-expiration-669067      │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p pause-615194 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-615194                │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p force-systemd-env-634162 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-634162    │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ delete  │ -p force-systemd-env-634162                                                                                                              │ force-systemd-env-634162    │ jenkins │ v1.37.0 │ 10 Dec 25 22:59 UTC │ 10 Dec 25 22:59 UTC │
	│ start   │ -p missing-upgrade-628477 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-628477      │ jenkins │ v1.35.0 │ 10 Dec 25 22:59 UTC │                     │
	│ delete  │ -p offline-crio-615390                                                                                                                   │ offline-crio-615390         │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p pause-615194 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-615194                │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │ 10 Dec 25 23:00 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-000011   │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │                     │
	│ pause   │ -p pause-615194 --alsologtostderr -v=5                                                                                                   │ pause-615194                │ jenkins │ v1.37.0 │ 10 Dec 25 23:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:00:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:00:14.468965  205784 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:00:14.469257  205784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:00:14.469301  205784 out.go:374] Setting ErrFile to fd 2...
	I1210 23:00:14.469312  205784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:00:14.469611  205784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:00:14.470191  205784 out.go:368] Setting JSON to false
	I1210 23:00:14.471354  205784 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2556,"bootTime":1765405058,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:00:14.471420  205784 start.go:143] virtualization: kvm guest
	I1210 23:00:14.473465  205784 out.go:179] * [kubernetes-upgrade-000011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:00:14.474945  205784 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:00:14.474957  205784 notify.go:221] Checking for updates...
	I1210 23:00:14.478126  205784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:00:14.479819  205784 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:00:14.484357  205784 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:00:14.485839  205784 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:00:14.487203  205784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:00:14.489196  205784 config.go:182] Loaded profile config "cert-expiration-669067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:14.489354  205784 config.go:182] Loaded profile config "missing-upgrade-628477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:00:14.489564  205784 config.go:182] Loaded profile config "pause-615194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:14.489703  205784 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:00:14.518656  205784 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:00:14.518780  205784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:00:14.586297  205784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 23:00:14.574283467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:00:14.586453  205784 docker.go:319] overlay module found
	I1210 23:00:14.589715  205784 out.go:179] * Using the docker driver based on user configuration
	I1210 23:00:14.591204  205784 start.go:309] selected driver: docker
	I1210 23:00:14.591222  205784 start.go:927] validating driver "docker" against <nil>
	I1210 23:00:14.591234  205784 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:00:14.591869  205784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:00:14.656414  205784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:68 SystemTime:2025-12-10 23:00:14.645850905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:00:14.656587  205784 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:00:14.656874  205784 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 23:00:14.659862  205784 out.go:179] * Using Docker driver with root privileges
	I1210 23:00:14.661579  205784 cni.go:84] Creating CNI manager for ""
	I1210 23:00:14.661665  205784 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:00:14.661681  205784 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:00:14.661773  205784 start.go:353] cluster config:
	{Name:kubernetes-upgrade-000011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-000011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:00:14.667236  205784 out.go:179] * Starting "kubernetes-upgrade-000011" primary control-plane node in "kubernetes-upgrade-000011" cluster
	I1210 23:00:14.668929  205784 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:00:14.670665  205784 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:00:14.341991  203885 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:00:14.342316  203885 start.go:159] libmachine.API.Create for "missing-upgrade-628477" (driver="docker")
	I1210 23:00:14.342354  203885 client.go:168] LocalClient.Create starting
	I1210 23:00:14.342426  203885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:00:14.342467  203885 main.go:141] libmachine: Decoding PEM data...
	I1210 23:00:14.342482  203885 main.go:141] libmachine: Parsing certificate...
	I1210 23:00:14.342573  203885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:00:14.342601  203885 main.go:141] libmachine: Decoding PEM data...
	I1210 23:00:14.342612  203885 main.go:141] libmachine: Parsing certificate...
	I1210 23:00:14.343086  203885 cli_runner.go:164] Run: docker network inspect missing-upgrade-628477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:00:14.363272  203885 cli_runner.go:211] docker network inspect missing-upgrade-628477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:00:14.363343  203885 network_create.go:284] running [docker network inspect missing-upgrade-628477] to gather additional debugging logs...
	I1210 23:00:14.363362  203885 cli_runner.go:164] Run: docker network inspect missing-upgrade-628477
	W1210 23:00:14.384344  203885 cli_runner.go:211] docker network inspect missing-upgrade-628477 returned with exit code 1
	I1210 23:00:14.384364  203885 network_create.go:287] error running [docker network inspect missing-upgrade-628477]: docker network inspect missing-upgrade-628477: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-628477 not found
	I1210 23:00:14.384383  203885 network_create.go:289] output of [docker network inspect missing-upgrade-628477]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-628477 not found
	
	** /stderr **
	I1210 23:00:14.384474  203885 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:00:14.402519  203885 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:00:14.403198  203885 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:00:14.403846  203885 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:00:14.404580  203885 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c4aa16abc5ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:5d:47:86:ab:fe} reservation:<nil>}
	I1210 23:00:14.405421  203885 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-005ef8a21e87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:46:0a:99:96:13} reservation:<nil>}
	I1210 23:00:14.406216  203885 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b82af0}
	I1210 23:00:14.406236  203885 network_create.go:124] attempt to create docker network missing-upgrade-628477 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:00:14.406290  203885 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-628477 missing-upgrade-628477
	I1210 23:00:14.464564  203885 network_create.go:108] docker network missing-upgrade-628477 192.168.94.0/24 created
	I1210 23:00:14.464597  203885 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-628477" container
	I1210 23:00:14.464705  203885 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:00:14.487788  203885 cli_runner.go:164] Run: docker volume create missing-upgrade-628477 --label name.minikube.sigs.k8s.io=missing-upgrade-628477 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:00:14.510617  203885 oci.go:103] Successfully created a docker volume missing-upgrade-628477
	I1210 23:00:14.510745  203885 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-628477-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-628477 --entrypoint /usr/bin/test -v missing-upgrade-628477:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1210 23:00:14.672148  205784 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 23:00:14.672206  205784 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 23:00:14.672240  205784 cache.go:65] Caching tarball of preloaded images
	I1210 23:00:14.672340  205784 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:00:14.672400  205784 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:00:14.672414  205784 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 23:00:14.672564  205784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/config.json ...
	I1210 23:00:14.672598  205784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/config.json: {Name:mkb3fff544a2a99b0b6e4089d74f800ea6496126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:14.701009  205784 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:00:14.701032  205784 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:00:14.701052  205784 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:00:14.701092  205784 start.go:360] acquireMachinesLock for kubernetes-upgrade-000011: {Name:mk3bb2603eb5718897233c9748a9e145f39c334c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:00:14.701214  205784 start.go:364] duration metric: took 96.008µs to acquireMachinesLock for "kubernetes-upgrade-000011"
	I1210 23:00:14.701244  205784 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-000011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-000011 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:00:14.701309  205784 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:00:14.426119  205002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:00:14.433081  205002 fix.go:56] duration metric: took 3.738335473s for fixHost
	I1210 23:00:14.433116  205002 start.go:83] releasing machines lock for "pause-615194", held for 3.738396397s
	I1210 23:00:14.433192  205002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-615194
	I1210 23:00:14.455917  205002 ssh_runner.go:195] Run: cat /version.json
	I1210 23:00:14.455971  205002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-615194
	I1210 23:00:14.456002  205002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:00:14.456066  205002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-615194
	I1210 23:00:14.479255  205002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/pause-615194/id_rsa Username:docker}
	I1210 23:00:14.479382  205002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/pause-615194/id_rsa Username:docker}
	I1210 23:00:14.580694  205002 ssh_runner.go:195] Run: systemctl --version
	I1210 23:00:14.653903  205002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:00:14.699957  205002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:00:14.705590  205002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:00:14.705672  205002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:00:14.714972  205002 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 23:00:14.714998  205002 start.go:496] detecting cgroup driver to use...
	I1210 23:00:14.715030  205002 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:00:14.715214  205002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:00:14.736192  205002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:00:14.751875  205002 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:00:14.751934  205002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:00:14.770276  205002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:00:14.786506  205002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:00:14.938913  205002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:00:15.073928  205002 docker.go:234] disabling docker service ...
	I1210 23:00:15.073992  205002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:00:15.093963  205002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:00:15.111358  205002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:00:15.257983  205002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:00:15.415895  205002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:00:15.430344  205002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:00:15.445984  205002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:00:15.446103  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.457317  205002 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:00:15.457399  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.468636  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.479075  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.488971  205002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:00:15.499797  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.526465  205002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.536459  205002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:00:15.584108  205002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:00:15.592492  205002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:00:15.600588  205002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:00:15.718891  205002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:00:16.297807  205002 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:00:16.297876  205002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:00:16.304212  205002 start.go:564] Will wait 60s for crictl version
	I1210 23:00:16.304276  205002 ssh_runner.go:195] Run: which crictl
	I1210 23:00:16.308475  205002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:00:16.338867  205002 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:00:16.338955  205002 ssh_runner.go:195] Run: crio --version
	I1210 23:00:16.377106  205002 ssh_runner.go:195] Run: crio --version
	I1210 23:00:16.412206  205002 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:00:16.413603  205002 cli_runner.go:164] Run: docker network inspect pause-615194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:00:16.437447  205002 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:00:16.442056  205002 kubeadm.go:884] updating cluster {Name:pause-615194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-615194 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false
nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:00:16.442272  205002 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:00:16.442342  205002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:00:16.478038  205002 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:00:16.478062  205002 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:00:16.478106  205002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:00:16.506778  205002 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:00:16.506806  205002 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:00:16.506815  205002 kubeadm.go:935] updating node { 192.168.76.2  8443 v1.34.2 crio true true} ...
	I1210 23:00:16.506951  205002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-615194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-615194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:00:16.507047  205002 ssh_runner.go:195] Run: crio config
	I1210 23:00:16.564384  205002 cni.go:84] Creating CNI manager for ""
	I1210 23:00:16.564430  205002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:00:16.564445  205002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:00:16.564484  205002 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-615194 NodeName:pause-615194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:00:16.564673  205002 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-615194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
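The payload written to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming gopkg.in/yaml.v3 and that it is run on the node where that file was copied, of walking the documents and printing the kubelet's cgroupDriver, which should agree with the "systemd" cgroup manager cri-o was configured with earlier in this log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step shown in the log above.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		// Only the KubeletConfiguration document carries cgroupDriver.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"]) // expected: systemd
		}
	}
}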
	I1210 23:00:16.564761  205002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:00:16.576756  205002 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:00:16.576826  205002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:00:16.586743  205002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1210 23:00:16.600720  205002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:00:16.616876  205002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1210 23:00:16.631468  205002 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:00:16.636753  205002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:00:16.788605  205002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:00:16.803112  205002 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194 for IP: 192.168.76.2
	I1210 23:00:16.803136  205002 certs.go:195] generating shared ca certs ...
	I1210 23:00:16.803155  205002 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:16.803369  205002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:00:16.803439  205002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:00:16.803455  205002 certs.go:257] generating profile certs ...
	I1210 23:00:16.803565  205002 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.key
	I1210 23:00:16.803679  205002 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/apiserver.key.09d18f28
	I1210 23:00:16.803741  205002 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/proxy-client.key
	I1210 23:00:16.803917  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:00:16.803970  205002 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:00:16.803984  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:00:16.804023  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:00:16.804070  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:00:16.804105  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:00:16.804172  205002 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:00:16.804903  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:00:16.827281  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:00:16.851001  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:00:16.870582  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:00:16.890187  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 23:00:16.911067  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:00:16.933296  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:00:16.954294  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:00:16.974519  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:00:16.997775  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:00:17.020874  205002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:00:17.042624  205002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:00:17.057836  205002 ssh_runner.go:195] Run: openssl version
	I1210 23:00:17.064957  205002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.073541  205002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:00:17.082090  205002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.086633  205002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.086701  205002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:00:17.129383  205002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:00:17.137983  205002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.146803  205002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:00:17.156224  205002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.160870  205002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.160941  205002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:00:17.196400  205002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:00:17.207006  205002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.215447  205002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:00:17.226100  205002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.231594  205002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.231672  205002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:00:17.268681  205002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:00:17.277628  205002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:00:17.282194  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:00:17.332885  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:00:17.371394  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:00:17.418771  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:00:17.468674  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:00:17.515901  205002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 23:00:17.566608  205002 kubeadm.go:401] StartCluster: {Name:pause-615194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-615194 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvi
dia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:00:17.566775  205002 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:00:17.566835  205002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:00:17.607295  205002 cri.go:89] found id: "2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c"
	I1210 23:00:17.607318  205002 cri.go:89] found id: "b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d"
	I1210 23:00:17.607324  205002 cri.go:89] found id: "0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5"
	I1210 23:00:17.607329  205002 cri.go:89] found id: "7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303"
	I1210 23:00:17.607333  205002 cri.go:89] found id: "9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c"
	I1210 23:00:17.607337  205002 cri.go:89] found id: "31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398"
	I1210 23:00:17.607339  205002 cri.go:89] found id: "787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554"
	I1210 23:00:17.607342  205002 cri.go:89] found id: ""
	I1210 23:00:17.607388  205002 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 23:00:17.623792  205002 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:00:17Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:00:17.623868  205002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:00:17.634295  205002 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:00:17.634316  205002 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:00:17.634365  205002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:00:17.644456  205002 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:00:17.645122  205002 kubeconfig.go:125] found "pause-615194" server: "https://192.168.76.2:8443"
	I1210 23:00:17.645931  205002 kapi.go:59] client config for pause-615194: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:00:17.646391  205002 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 23:00:17.646417  205002 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 23:00:17.646421  205002 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 23:00:17.646428  205002 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 23:00:17.646432  205002 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 23:00:17.646848  205002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:00:17.656027  205002 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 23:00:17.656069  205002 kubeadm.go:602] duration metric: took 21.746108ms to restartPrimaryControlPlane
	I1210 23:00:17.656080  205002 kubeadm.go:403] duration metric: took 89.487676ms to StartCluster
	I1210 23:00:17.656100  205002 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:17.656189  205002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:00:17.657018  205002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:00:17.680354  205002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:00:17.680422  205002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:00:17.680677  205002 config.go:182] Loaded profile config "pause-615194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:00:17.796851  205002 out.go:179] * Verifying Kubernetes components...
	I1210 23:00:17.796888  205002 out.go:179] * Enabled addons: 
	I1210 23:00:17.861944  205002 addons.go:530] duration metric: took 181.497537ms for enable addons: enabled=[]
	I1210 23:00:17.861981  205002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:00:17.986945  205002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:00:18.000655  205002 node_ready.go:35] waiting up to 6m0s for node "pause-615194" to be "Ready" ...
	I1210 23:00:18.009739  205002 node_ready.go:49] node "pause-615194" is "Ready"
	I1210 23:00:18.009770  205002 node_ready.go:38] duration metric: took 9.076812ms for node "pause-615194" to be "Ready" ...
	I1210 23:00:18.009787  205002 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:00:18.009843  205002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:00:18.022620  205002 api_server.go:72] duration metric: took 342.20322ms to wait for apiserver process to appear ...
	I1210 23:00:18.022660  205002 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:00:18.022685  205002 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:00:18.027045  205002 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 23:00:18.028046  205002 api_server.go:141] control plane version: v1.34.2
	I1210 23:00:18.028074  205002 api_server.go:131] duration metric: took 5.405155ms to wait for apiserver health ...
	I1210 23:00:18.028085  205002 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:00:18.031530  205002 system_pods.go:59] 7 kube-system pods found
	I1210 23:00:18.031562  205002 system_pods.go:61] "coredns-66bc5c9577-rcw4l" [152633ef-75ee-401c-8f62-68ecef534501] Running
	I1210 23:00:18.031570  205002 system_pods.go:61] "etcd-pause-615194" [87151b12-02ab-48f1-9d23-c4e7cc1c86e3] Running
	I1210 23:00:18.031576  205002 system_pods.go:61] "kindnet-7s4fz" [8a7bab97-f5f0-4c87-9b64-567b5c26a5e6] Running
	I1210 23:00:18.031582  205002 system_pods.go:61] "kube-apiserver-pause-615194" [8e329430-7379-4b0f-8ab3-4fb88f1e0f77] Running
	I1210 23:00:18.031589  205002 system_pods.go:61] "kube-controller-manager-pause-615194" [1457b77d-25a2-4901-ae8b-ad4e52efce55] Running
	I1210 23:00:18.031596  205002 system_pods.go:61] "kube-proxy-gg5fh" [afaeee49-2e88-47ff-b34d-e636614ad430] Running
	I1210 23:00:18.031602  205002 system_pods.go:61] "kube-scheduler-pause-615194" [8319e2d9-3d29-468b-ab91-a55655a4d6e9] Running
	I1210 23:00:18.031611  205002 system_pods.go:74] duration metric: took 3.518449ms to wait for pod list to return data ...
	I1210 23:00:18.031625  205002 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:00:18.033859  205002 default_sa.go:45] found service account: "default"
	I1210 23:00:18.033884  205002 default_sa.go:55] duration metric: took 2.251079ms for default service account to be created ...
	I1210 23:00:18.033894  205002 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:00:18.036657  205002 system_pods.go:86] 7 kube-system pods found
	I1210 23:00:18.036684  205002 system_pods.go:89] "coredns-66bc5c9577-rcw4l" [152633ef-75ee-401c-8f62-68ecef534501] Running
	I1210 23:00:18.036692  205002 system_pods.go:89] "etcd-pause-615194" [87151b12-02ab-48f1-9d23-c4e7cc1c86e3] Running
	I1210 23:00:18.036698  205002 system_pods.go:89] "kindnet-7s4fz" [8a7bab97-f5f0-4c87-9b64-567b5c26a5e6] Running
	I1210 23:00:18.036703  205002 system_pods.go:89] "kube-apiserver-pause-615194" [8e329430-7379-4b0f-8ab3-4fb88f1e0f77] Running
	I1210 23:00:18.036708  205002 system_pods.go:89] "kube-controller-manager-pause-615194" [1457b77d-25a2-4901-ae8b-ad4e52efce55] Running
	I1210 23:00:18.036714  205002 system_pods.go:89] "kube-proxy-gg5fh" [afaeee49-2e88-47ff-b34d-e636614ad430] Running
	I1210 23:00:18.036719  205002 system_pods.go:89] "kube-scheduler-pause-615194" [8319e2d9-3d29-468b-ab91-a55655a4d6e9] Running
	I1210 23:00:18.036729  205002 system_pods.go:126] duration metric: took 2.827988ms to wait for k8s-apps to be running ...
	I1210 23:00:18.036742  205002 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:00:18.036792  205002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:00:18.050630  205002 system_svc.go:56] duration metric: took 13.882227ms WaitForService to wait for kubelet
	I1210 23:00:18.050685  205002 kubeadm.go:587] duration metric: took 370.275106ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:00:18.050710  205002 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:00:18.053331  205002 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:00:18.053361  205002 node_conditions.go:123] node cpu capacity is 8
	I1210 23:00:18.053382  205002 node_conditions.go:105] duration metric: took 2.664842ms to run NodePressure ...
	I1210 23:00:18.053397  205002 start.go:242] waiting for startup goroutines ...
	I1210 23:00:18.053412  205002 start.go:247] waiting for cluster config update ...
	I1210 23:00:18.053426  205002 start.go:256] writing updated cluster config ...
	I1210 23:00:18.107539  205002 ssh_runner.go:195] Run: rm -f paused
	I1210 23:00:18.113252  205002 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:00:18.114118  205002 kapi.go:59] client config for pause-615194: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/profiles/pause-615194/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:00:18.117352  205002 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcw4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.122188  205002 pod_ready.go:94] pod "coredns-66bc5c9577-rcw4l" is "Ready"
	I1210 23:00:18.122210  205002 pod_ready.go:86] duration metric: took 4.838463ms for pod "coredns-66bc5c9577-rcw4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.124074  205002 pod_ready.go:83] waiting for pod "etcd-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.128723  205002 pod_ready.go:94] pod "etcd-pause-615194" is "Ready"
	I1210 23:00:18.128751  205002 pod_ready.go:86] duration metric: took 4.645949ms for pod "etcd-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.130951  205002 pod_ready.go:83] waiting for pod "kube-apiserver-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.135291  205002 pod_ready.go:94] pod "kube-apiserver-pause-615194" is "Ready"
	I1210 23:00:18.135313  205002 pod_ready.go:86] duration metric: took 4.338978ms for pod "kube-apiserver-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.137442  205002 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.518400  205002 pod_ready.go:94] pod "kube-controller-manager-pause-615194" is "Ready"
	I1210 23:00:18.518429  205002 pod_ready.go:86] duration metric: took 380.961519ms for pod "kube-controller-manager-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:18.718028  205002 pod_ready.go:83] waiting for pod "kube-proxy-gg5fh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:19.118420  205002 pod_ready.go:94] pod "kube-proxy-gg5fh" is "Ready"
	I1210 23:00:19.118447  205002 pod_ready.go:86] duration metric: took 400.388437ms for pod "kube-proxy-gg5fh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:19.317558  205002 pod_ready.go:83] waiting for pod "kube-scheduler-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:14.703110  205784 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:00:14.703349  205784 start.go:159] libmachine.API.Create for "kubernetes-upgrade-000011" (driver="docker")
	I1210 23:00:14.703385  205784 client.go:173] LocalClient.Create starting
	I1210 23:00:14.703467  205784 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:00:14.703499  205784 main.go:143] libmachine: Decoding PEM data...
	I1210 23:00:14.703517  205784 main.go:143] libmachine: Parsing certificate...
	I1210 23:00:14.703577  205784 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:00:14.703595  205784 main.go:143] libmachine: Decoding PEM data...
	I1210 23:00:14.703607  205784 main.go:143] libmachine: Parsing certificate...
	I1210 23:00:14.703952  205784 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-000011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:00:14.725328  205784 cli_runner.go:211] docker network inspect kubernetes-upgrade-000011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:00:14.725414  205784 network_create.go:284] running [docker network inspect kubernetes-upgrade-000011] to gather additional debugging logs...
	I1210 23:00:14.725439  205784 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-000011
	W1210 23:00:14.747928  205784 cli_runner.go:211] docker network inspect kubernetes-upgrade-000011 returned with exit code 1
	I1210 23:00:14.747955  205784 network_create.go:287] error running [docker network inspect kubernetes-upgrade-000011]: docker network inspect kubernetes-upgrade-000011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-000011 not found
	I1210 23:00:14.747967  205784 network_create.go:289] output of [docker network inspect kubernetes-upgrade-000011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-000011 not found
	
	** /stderr **
	I1210 23:00:14.748091  205784 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:00:14.767136  205784 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:00:14.767788  205784 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:00:14.768473  205784 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:00:14.769166  205784 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c4aa16abc5ae IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:5d:47:86:ab:fe} reservation:<nil>}
	I1210 23:00:14.770032  205784 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-005ef8a21e87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:46:0a:99:96:13} reservation:<nil>}
	I1210 23:00:14.770888  205784 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-59c76f53fda0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:1a:60:ec:7a:83:f9} reservation:<nil>}
	I1210 23:00:14.771735  205784 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb79d0}
	I1210 23:00:14.771761  205784 network_create.go:124] attempt to create docker network kubernetes-upgrade-000011 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 23:00:14.771818  205784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 kubernetes-upgrade-000011
	I1210 23:00:14.843528  205784 network_create.go:108] docker network kubernetes-upgrade-000011 192.168.103.0/24 created
	I1210 23:00:14.843564  205784 kic.go:121] calculated static IP "192.168.103.2" for the "kubernetes-upgrade-000011" container
	I1210 23:00:14.843664  205784 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:00:14.866695  205784 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-000011 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:00:14.890263  205784 oci.go:103] Successfully created a docker volume kubernetes-upgrade-000011
	I1210 23:00:14.890348  205784 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-000011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 --entrypoint /usr/bin/test -v kubernetes-upgrade-000011:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:00:15.474079  205784 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-000011
	I1210 23:00:15.474161  205784 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 23:00:15.474177  205784 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:00:15.474275  205784 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-000011:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:00:15.008797  203885 oci.go:107] Successfully prepared a docker volume missing-upgrade-628477
	I1210 23:00:15.008826  203885 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1210 23:00:15.008865  203885 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:00:15.008941  203885 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-628477:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:00:19.887866  205002 pod_ready.go:94] pod "kube-scheduler-pause-615194" is "Ready"
	I1210 23:00:19.887892  205002 pod_ready.go:86] duration metric: took 570.307918ms for pod "kube-scheduler-pause-615194" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:00:19.887902  205002 pod_ready.go:40] duration metric: took 1.774609971s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:00:19.933033  205002 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:00:20.064405  205002 out.go:179] * Done! kubectl is now configured to use "pause-615194" cluster and "default" namespace by default
	I1210 23:00:21.705324  205784 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-000011:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (6.230998716s)
	I1210 23:00:21.705363  205784 kic.go:203] duration metric: took 6.231181806s to extract preloaded images to volume ...
	W1210 23:00:21.705469  205784 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:00:21.705518  205784 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:00:21.705567  205784 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:00:21.768305  205784 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-000011 --name kubernetes-upgrade-000011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-000011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-000011 --network kubernetes-upgrade-000011 --ip 192.168.103.2 --volume kubernetes-upgrade-000011:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:00:22.120288  205784 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-000011 --format={{.State.Running}}
	I1210 23:00:22.141169  205784 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-000011 --format={{.State.Status}}
	I1210 23:00:22.160732  205784 cli_runner.go:164] Run: docker exec kubernetes-upgrade-000011 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:00:22.213412  205784 oci.go:144] the created container "kubernetes-upgrade-000011" has a running status.
	I1210 23:00:22.213458  205784 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/kubernetes-upgrade-000011/id_rsa...
	I1210 23:00:22.285008  205784 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/kubernetes-upgrade-000011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:00:22.315243  205784 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-000011 --format={{.State.Status}}
	I1210 23:00:22.337214  205784 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:00:22.337236  205784 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-000011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:00:22.398282  205784 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-000011 --format={{.State.Status}}
	I1210 23:00:22.420561  205784 machine.go:94] provisionDockerMachine start ...
	I1210 23:00:22.420707  205784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-000011
	I1210 23:00:22.450611  205784 main.go:143] libmachine: Using SSH client type: native
	I1210 23:00:22.451199  205784 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1210 23:00:22.451253  205784 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:00:22.452460  205784 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40448->127.0.0.1:32999: read: connection reset by peer
	I1210 23:00:21.704971  203885 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-628477:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (6.695990518s)
	I1210 23:00:21.704996  203885 kic.go:203] duration metric: took 6.696130542s to extract preloaded images to volume ...
	W1210 23:00:21.705086  203885 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:00:21.705114  203885 oci.go:249] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:00:21.705153  203885 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:00:21.767999  203885 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-628477 --name missing-upgrade-628477 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-628477 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-628477 --network missing-upgrade-628477 --ip 192.168.94.2 --volume missing-upgrade-628477:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1210 23:00:22.231803  203885 cli_runner.go:164] Run: docker container inspect missing-upgrade-628477 --format={{.State.Running}}
	I1210 23:00:22.254787  203885 cli_runner.go:164] Run: docker container inspect missing-upgrade-628477 --format={{.State.Status}}
	I1210 23:00:22.283753  203885 cli_runner.go:164] Run: docker exec missing-upgrade-628477 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:00:22.335409  203885 oci.go:144] the created container "missing-upgrade-628477" has a running status.
	I1210 23:00:22.335436  203885 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa...
	I1210 23:00:22.422177  203885 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:00:22.463101  203885 cli_runner.go:164] Run: docker container inspect missing-upgrade-628477 --format={{.State.Status}}
	I1210 23:00:22.500061  203885 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:00:22.500084  203885 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-628477 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:00:22.558716  203885 cli_runner.go:164] Run: docker container inspect missing-upgrade-628477 --format={{.State.Status}}
	I1210 23:00:22.597186  203885 machine.go:93] provisionDockerMachine start ...
	I1210 23:00:22.597303  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:22.627492  203885 main.go:141] libmachine: Using SSH client type: native
	I1210 23:00:22.627908  203885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1210 23:00:22.627995  203885 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 23:00:22.778902  203885 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-628477
	
	I1210 23:00:22.778925  203885 ubuntu.go:169] provisioning hostname "missing-upgrade-628477"
	I1210 23:00:22.779226  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:22.806283  203885 main.go:141] libmachine: Using SSH client type: native
	I1210 23:00:22.806538  203885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1210 23:00:22.806551  203885 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-628477 && echo "missing-upgrade-628477" | sudo tee /etc/hostname
	I1210 23:00:22.962227  203885 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-628477
	
	I1210 23:00:22.962316  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:22.982197  203885 main.go:141] libmachine: Using SSH client type: native
	I1210 23:00:22.982448  203885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1210 23:00:22.982462  203885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-628477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-628477/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-628477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:00:23.116460  203885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:00:23.116481  203885 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:00:23.116530  203885 ubuntu.go:177] setting up certificates
	I1210 23:00:23.116544  203885 provision.go:84] configureAuth start
	I1210 23:00:23.116611  203885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-628477
	I1210 23:00:23.138235  203885 provision.go:143] copyHostCerts
	I1210 23:00:23.138301  203885 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:00:23.138309  203885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:00:23.138380  203885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:00:23.138471  203885 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:00:23.138475  203885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:00:23.138501  203885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:00:23.138559  203885 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:00:23.138563  203885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:00:23.138585  203885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:00:23.138631  203885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-628477 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-628477]
	I1210 23:00:23.366442  203885 provision.go:177] copyRemoteCerts
	I1210 23:00:23.366498  203885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:00:23.366532  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:23.384725  203885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa Username:docker}
	I1210 23:00:23.480003  203885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:00:23.511485  203885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 23:00:23.539750  203885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 23:00:23.569290  203885 provision.go:87] duration metric: took 452.730065ms to configureAuth
	I1210 23:00:23.569313  203885 ubuntu.go:193] setting minikube options for container-runtime
	I1210 23:00:23.569534  203885 config.go:182] Loaded profile config "missing-upgrade-628477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:00:23.569676  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:23.588685  203885 main.go:141] libmachine: Using SSH client type: native
	I1210 23:00:23.588930  203885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I1210 23:00:23.588947  203885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:00:23.842545  203885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:00:23.842566  203885 machine.go:96] duration metric: took 1.245363001s to provisionDockerMachine
	I1210 23:00:23.842577  203885 client.go:171] duration metric: took 9.500218257s to LocalClient.Create
	I1210 23:00:23.842617  203885 start.go:167] duration metric: took 9.500282212s to libmachine.API.Create "missing-upgrade-628477"
	I1210 23:00:23.842625  203885 start.go:293] postStartSetup for "missing-upgrade-628477" (driver="docker")
	I1210 23:00:23.842690  203885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:00:23.842746  203885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:00:23.842793  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:23.862138  203885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa Username:docker}
	I1210 23:00:23.960842  203885 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:00:23.964746  203885 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:00:23.964770  203885 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1210 23:00:23.964777  203885 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1210 23:00:23.964782  203885 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1210 23:00:23.964792  203885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:00:23.964846  203885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:00:23.964910  203885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:00:23.964991  203885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:00:23.975020  203885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:00:24.005372  203885 start.go:296] duration metric: took 162.732272ms for postStartSetup
	I1210 23:00:24.005758  203885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-628477
	I1210 23:00:24.025241  203885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/missing-upgrade-628477/config.json ...
	I1210 23:00:24.025698  203885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:00:24.025760  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:24.047156  203885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa Username:docker}
	I1210 23:00:24.140272  203885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:00:24.145312  203885 start.go:128] duration metric: took 9.805214691s to createHost
	I1210 23:00:24.145331  203885 start.go:83] releasing machines lock for "missing-upgrade-628477", held for 9.805372281s
	I1210 23:00:24.145407  203885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-628477
	I1210 23:00:24.166969  203885 ssh_runner.go:195] Run: cat /version.json
	I1210 23:00:24.167015  203885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:00:24.167031  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:24.167067  203885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-628477
	I1210 23:00:24.188562  203885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa Username:docker}
	I1210 23:00:24.189450  203885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/missing-upgrade-628477/id_rsa Username:docker}
	I1210 23:00:24.280848  203885 ssh_runner.go:195] Run: systemctl --version
	I1210 23:00:24.355961  203885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:00:24.502453  203885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 23:00:24.507522  203885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:00:24.535056  203885 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1210 23:00:24.535130  203885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:00:24.572727  203885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1210 23:00:24.572744  203885 start.go:495] detecting cgroup driver to use...
	I1210 23:00:24.572775  203885 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:00:24.572844  203885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:00:24.591462  203885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:00:24.605556  203885 docker.go:217] disabling cri-docker service (if available) ...
	I1210 23:00:24.605604  203885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:00:24.620992  203885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:00:24.637181  203885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	
	
	==> CRI-O <==
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.191090169Z" level=info msg="RDT not available in the host system"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.191109448Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192085342Z" level=info msg="Conmon does support the --sync option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.1921055Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192118953Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192939359Z" level=info msg="Conmon does support the --sync option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.192955606Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.197780195Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.197821637Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.198892267Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.199712104Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.199861363Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.290326715Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-rcw4l Namespace:kube-system ID:aad3bb39104ed6e8f68b930015274b2444793e44694f9348b72449446f9d65cf UID:152633ef-75ee-401c-8f62-68ecef534501 NetNS:/var/run/netns/06562862-1a96-4bcd-943c-86b807c37667 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132c58}] Aliases:map[]}"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.290550633Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-rcw4l for CNI network kindnet (type=ptp)"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291001633Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291030348Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291083824Z" level=info msg="Create NRI interface"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291193416Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291206806Z" level=info msg="runtime interface created"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291219376Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291299887Z" level=info msg="runtime interface starting up..."
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291308853Z" level=info msg="starting plugins..."
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.291325983Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 23:00:16 pause-615194 crio[2160]: time="2025-12-10T23:00:16.29174152Z" level=info msg="No systemd watchdog enabled"
	Dec 10 23:00:16 pause-615194 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2f0a8ad412f18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 seconds ago      Running             coredns                   0                   aad3bb39104ed       coredns-66bc5c9577-rcw4l               kube-system
	b8cf198856b60       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   29 seconds ago      Running             kindnet-cni               0                   8c77ec48e3eb2       kindnet-7s4fz                          kube-system
	0a6eec3686ed6       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   29 seconds ago      Running             kube-proxy                0                   84953f47f58a7       kube-proxy-gg5fh                       kube-system
	7da3e72ac7034       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   39 seconds ago      Running             etcd                      0                   5587595e4a72d       etcd-pause-615194                      kube-system
	9ce1af4a96891       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   39 seconds ago      Running             kube-apiserver            0                   1b27d30bc6abe       kube-apiserver-pause-615194            kube-system
	31472513a1dc5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   39 seconds ago      Running             kube-controller-manager   0                   69183bb1c0dbf       kube-controller-manager-pause-615194   kube-system
	787e9326334ee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   39 seconds ago      Running             kube-scheduler            0                   e7e1b446627ee       kube-scheduler-pause-615194            kube-system
	
	
	==> coredns [2f0a8ad412f1823ad5adfabe7fce04048b7c2eff686e0e53a4b45974a901512c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44699 - 62466 "HINFO IN 8549627779733484000.7721476600492274986. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022929763s
	
	
	==> describe nodes <==
	Name:               pause-615194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-615194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=pause-615194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T22_59_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 22:59:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-615194
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:00:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 22:59:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 22:59:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 22:59:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:00:06 +0000   Wed, 10 Dec 2025 23:00:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-615194
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                a7b3df45-5a23-4710-b912-c560621ca83d
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rcw4l                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-615194                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-7s4fz                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-pause-615194             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-pause-615194    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-gg5fh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-pause-615194             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node pause-615194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node pause-615194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node pause-615194 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node pause-615194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node pause-615194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet          Node pause-615194 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node pause-615194 event: Registered Node pause-615194 in Controller
	  Normal  NodeReady                19s                kubelet          Node pause-615194 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [7da3e72ac703488e591a0c2aeb523b64d92fc632b7ebe169f577a0bb527ac303] <==
	{"level":"warn","ts":"2025-12-10T22:59:46.965828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:46.974726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:46.983546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:46.994829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.009055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.018417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.026766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.036305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.044875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.054549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.068154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.081508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.098023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.132864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.142703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.151593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.161096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.171468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.180326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.194687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.201561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.209798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:59:47.269008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:00:19.886539Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.93638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-615194\" limit:1 ","response":"range_response_count:1 size:5412"}
	{"level":"info","ts":"2025-12-10T23:00:19.886685Z","caller":"traceutil/trace.go:172","msg":"trace[881253544] range","detail":"{range_begin:/registry/minions/pause-615194; range_end:; response_count:1; response_revision:442; }","duration":"170.063265ms","start":"2025-12-10T23:00:19.716573Z","end":"2025-12-10T23:00:19.886637Z","steps":["trace[881253544] 'range keys from in-memory index tree'  (duration: 169.799613ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:00:25 up 42 min,  0 user,  load average: 4.44, 1.93, 1.29
	Linux pause-615194 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8cf198856b60e6dac8ad4989c2bf9bb80fd75cf9136ccd438beed036b2fe74d] <==
	I1210 22:59:56.511055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 22:59:56.511374       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 22:59:56.511539       1 main.go:148] setting mtu 1500 for CNI 
	I1210 22:59:56.511555       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 22:59:56.511577       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T22:59:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 22:59:56.728803       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 22:59:56.728962       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 22:59:56.728988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 22:59:56.729133       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 22:59:57.146063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 22:59:57.225938       1 metrics.go:72] Registering metrics
	I1210 22:59:57.226058       1 controller.go:711] "Syncing nftables rules"
	I1210 23:00:06.729472       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:00:06.729568       1 main.go:301] handling current node
	I1210 23:00:16.736734       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:00:16.736765       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ce1af4a968911f2b800b1e98ad7812c78a5e3d06b1104c55cbbf8cedffb418c] <==
	I1210 22:59:47.886516       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1210 22:59:47.887007       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 22:59:47.891270       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:47.891574       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 22:59:47.898475       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:47.898802       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 22:59:47.905471       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 22:59:48.072994       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 22:59:48.784103       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 22:59:48.795562       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 22:59:48.795683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 22:59:49.371403       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 22:59:49.414458       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 22:59:49.486478       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 22:59:49.493202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 22:59:49.494331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 22:59:49.499517       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 22:59:49.811873       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 22:59:50.288851       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 22:59:50.310376       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 22:59:50.326427       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 22:59:55.511877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 22:59:55.565938       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:55.569284       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 22:59:55.813464       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [31472513a1dc5101459b3116d6253915e5c9d8201a36200f0c59f6bcb1ddf398] <==
	I1210 22:59:54.902389       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 22:59:54.907052       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 22:59:54.907368       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 22:59:54.907464       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 22:59:54.907580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 22:59:54.908004       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 22:59:54.908043       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 22:59:54.908043       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 22:59:54.907980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 22:59:54.908366       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 22:59:54.909508       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 22:59:54.909576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 22:59:54.912773       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 22:59:54.913942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 22:59:54.914265       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 22:59:54.914306       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 22:59:54.914314       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 22:59:54.914322       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 22:59:54.918434       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 22:59:54.920867       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 22:59:54.920872       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 22:59:54.929159       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 22:59:54.936865       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-615194" podCIDRs=["10.244.0.0/24"]
	I1210 22:59:54.938952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:00:09.859527       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0a6eec3686ed6a672fde02c425568aa7685ed93828ae77ba478a043711595cc5] <==
	I1210 22:59:56.262473       1 server_linux.go:53] "Using iptables proxy"
	I1210 22:59:56.347135       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 22:59:56.447684       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 22:59:56.447731       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 22:59:56.447811       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:59:56.473081       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 22:59:56.473146       1 server_linux.go:132] "Using iptables Proxier"
	I1210 22:59:56.479270       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:59:56.479696       1 server.go:527] "Version info" version="v1.34.2"
	I1210 22:59:56.479725       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:59:56.482689       1 config.go:200] "Starting service config controller"
	I1210 22:59:56.482720       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:59:56.482812       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:59:56.482829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:59:56.483027       1 config.go:309] "Starting node config controller"
	I1210 22:59:56.483040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:59:56.483047       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 22:59:56.483179       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:59:56.483204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:59:56.583417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 22:59:56.583420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 22:59:56.587278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [787e9326334ee9dc0e7c987bc721c8ac46e8bf9ea50e354e296f418e34a33554] <==
	E1210 22:59:47.842329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:59:47.842490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 22:59:47.842515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 22:59:47.842544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:59:47.842556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:59:47.842587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 22:59:47.842585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:59:47.842602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:59:47.842622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:59:47.842816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:59:47.842827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 22:59:48.648035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 22:59:48.658273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:59:48.676682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:59:48.798591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:59:48.805795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 22:59:48.818378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:59:49.017469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 22:59:49.031778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 22:59:49.032006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:59:49.103039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 22:59:49.166629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:59:49.185873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:59:49.296603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 22:59:51.140061       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 22:59:56 pause-615194 kubelet[1300]: I1210 22:59:56.340732    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gg5fh" podStartSLOduration=1.340705025 podStartE2EDuration="1.340705025s" podCreationTimestamp="2025-12-10 22:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 22:59:56.340596733 +0000 UTC m=+6.248705065" watchObservedRunningTime="2025-12-10 22:59:56.340705025 +0000 UTC m=+6.248813374"
	Dec 10 22:59:56 pause-615194 kubelet[1300]: I1210 22:59:56.355953    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7s4fz" podStartSLOduration=1.355925814 podStartE2EDuration="1.355925814s" podCreationTimestamp="2025-12-10 22:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 22:59:56.35511557 +0000 UTC m=+6.263223901" watchObservedRunningTime="2025-12-10 22:59:56.355925814 +0000 UTC m=+6.264034147"
	Dec 10 23:00:06 pause-615194 kubelet[1300]: I1210 23:00:06.796402    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 23:00:06 pause-615194 kubelet[1300]: I1210 23:00:06.857457    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzslx\" (UniqueName: \"kubernetes.io/projected/152633ef-75ee-401c-8f62-68ecef534501-kube-api-access-jzslx\") pod \"coredns-66bc5c9577-rcw4l\" (UID: \"152633ef-75ee-401c-8f62-68ecef534501\") " pod="kube-system/coredns-66bc5c9577-rcw4l"
	Dec 10 23:00:06 pause-615194 kubelet[1300]: I1210 23:00:06.857529    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152633ef-75ee-401c-8f62-68ecef534501-config-volume\") pod \"coredns-66bc5c9577-rcw4l\" (UID: \"152633ef-75ee-401c-8f62-68ecef534501\") " pod="kube-system/coredns-66bc5c9577-rcw4l"
	Dec 10 23:00:07 pause-615194 kubelet[1300]: I1210 23:00:07.366055    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rcw4l" podStartSLOduration=11.366021218 podStartE2EDuration="11.366021218s" podCreationTimestamp="2025-12-10 22:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:00:07.365770895 +0000 UTC m=+17.273879258" watchObservedRunningTime="2025-12-10 23:00:07.366021218 +0000 UTC m=+17.274129549"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.294273    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294384    1300 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294465    1300 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294484    1300 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.294498    1300 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.360954    1300 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.361024    1300 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: E1210 23:00:12.361041    1300 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.394579    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.525253    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:12 pause-615194 kubelet[1300]: W1210 23:00:12.771473    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:13 pause-615194 kubelet[1300]: W1210 23:00:13.261746    1300 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 23:00:13 pause-615194 kubelet[1300]: E1210 23:00:13.361920    1300 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 23:00:13 pause-615194 kubelet[1300]: E1210 23:00:13.361986    1300 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:13 pause-615194 kubelet[1300]: E1210 23:00:13.362002    1300 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 23:00:20 pause-615194 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:00:20 pause-615194 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:00:20 pause-615194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:00:20 pause-615194 systemd[1]: kubelet.service: Consumed 1.427s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-615194 -n pause-615194
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-615194 -n pause-615194: exit status 2 (349.833268ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-615194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.98s)
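The post-mortem above shows kubelet repeatedly failing to dial /var/run/crio/crio.sock while CRI-O restarted, and the status helper reporting the API server as Running while still exiting with status 2. A minimal manual check, assuming the pause-615194 node is still up and reachable over "minikube ssh" (these commands are illustrative and not part of the test run), could look like:

	# check that both the container runtime and the kubelet are active inside the node
	out/minikube-linux-amd64 ssh -p pause-615194 "sudo systemctl is-active crio kubelet"
	# re-query the profile status that the helper above checked
	out/minikube-linux-amd64 status -p pause-615194 -n pause-615194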

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (242.454803ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:04:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-280530 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-280530 describe deploy/metrics-server -n kube-system: exit status 1 (57.597578ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-280530 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
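The failure above has two visible symptoms: the paused check runs "sudo runc list -f json" inside the node and fails because /run/runc does not exist, and the metrics-server deployment is never created, so the image check finds nothing. A minimal way to reproduce both observations by hand, assuming the old-k8s-version-280530 profile is still running (commands are illustrative, not part of the test run), could be:

	# reproduce the runc error that MK_ADDON_ENABLE_PAUSED reports
	out/minikube-linux-amd64 ssh -p old-k8s-version-280530 "sudo runc list -f json"
	# confirm whether the runc state directory exists at all
	out/minikube-linux-amd64 ssh -p old-k8s-version-280530 "sudo ls /run/runc"
	# check which image (if any) the metrics-server deployment is using
	kubectl --context old-k8s-version-280530 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'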
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-280530
helpers_test.go:244: (dbg) docker inspect old-k8s-version-280530:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e",
	        "Created": "2025-12-10T23:03:39.731784379Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:03:39.777932646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/hostname",
	        "HostsPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/hosts",
	        "LogPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e-json.log",
	        "Name": "/old-k8s-version-280530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-280530:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-280530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e",
	                "LowerDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-280530",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-280530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-280530",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-280530",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-280530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5934e95195d186af4f4a2844793862afb7bdcd7c61663369e80b13df8d08952d",
	            "SandboxKey": "/var/run/docker/netns/5934e95195d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-280530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a08a4bae7c4413ec6f525605767e6d6cb6a704250cf4124a75f3ad968a97154c",
	                    "EndpointID": "02bb35bc51a08ba61c3a9b6f97648aef28d3def55f5ff7bd09c7fbbbaccd69da",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f6:04:0b:cf:71:7b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-280530",
	                        "733a37f892c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
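The inspect dump above is what the post-mortem step records for the node container; the bindings under NetworkSettings.Ports are how the tests reach the node over SSH (22/tcp) and the API server (8443/tcp) on the loopback ports shown. As an illustration only, and not part of the test suite, here is a minimal Go sketch that pulls the 22/tcp host endpoint out of such a dump; the struct mirrors only the JSON keys visible in the report.

// Illustrative sketch: parse `docker container inspect <name>` output like the
// dump above and print the host endpoint bound to the container's 22/tcp port.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	raw, err := os.ReadFile(os.Args[1]) // file holding the inspect JSON array
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(raw, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		for _, b := range e.NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33054
		}
	}
}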
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25: (1.12630324s)
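The post-mortem log collection is simply an invocation of the freshly built minikube binary, as the Run/Done lines above show. A standalone Go sketch of the same call follows; the binary path and profile name are copied from the command in the report, and the error handling is illustrative rather than what helpers_test.go actually does.

// Illustrative sketch: run `out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25`
// and print its combined output, mirroring the post-mortem step above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "old-k8s-version-280530", "logs", "-n", "25")
	out, err := cmd.CombinedOutput() // stdout and stderr together, as the report prints them
	if err != nil {
		fmt.Printf("minikube logs failed: %v\n", err)
	}
	fmt.Print(string(out))
}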
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-177285 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo containerd config dump                                                                                                                                                                                                  │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo crio config                                                                                                                                                                                                             │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p cilium-177285                                                                                                                                                                                                                              │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p force-systemd-flag-725815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-725815 │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ force-systemd-flag-725815 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-725815 │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ delete  │ -p force-systemd-flag-725815                                                                                                                                                                                                                  │ force-systemd-flag-725815 │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530    │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ stop    │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439         │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-280530    │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:03:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:03:48.947755  257827 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:03:48.947874  257827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:03:48.947885  257827 out.go:374] Setting ErrFile to fd 2...
	I1210 23:03:48.947890  257827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:03:48.948124  257827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:03:48.948635  257827 out.go:368] Setting JSON to false
	I1210 23:03:48.949740  257827 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2771,"bootTime":1765405058,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:03:48.949803  257827 start.go:143] virtualization: kvm guest
	I1210 23:03:48.951953  257827 out.go:179] * [no-preload-092439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:03:48.953190  257827 notify.go:221] Checking for updates...
	I1210 23:03:48.953194  257827 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:03:48.954508  257827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:03:48.955846  257827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:03:48.957166  257827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:03:48.958377  257827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:03:48.959611  257827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:03:48.961304  257827 config.go:182] Loaded profile config "kubernetes-upgrade-000011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:03:48.961449  257827 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:03:48.961592  257827 config.go:182] Loaded profile config "stopped-upgrade-679204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:03:48.961700  257827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:03:48.984986  257827 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:03:48.985087  257827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:03:49.042813  257827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:03:49.033745424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:03:49.042911  257827 docker.go:319] overlay module found
	I1210 23:03:49.045119  257827 out.go:179] * Using the docker driver based on user configuration
	I1210 23:03:49.046303  257827 start.go:309] selected driver: docker
	I1210 23:03:49.046317  257827 start.go:927] validating driver "docker" against <nil>
	I1210 23:03:49.046331  257827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:03:49.046954  257827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:03:49.104215  257827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:03:49.094252924 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:03:49.104446  257827 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:03:49.104755  257827 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:03:49.106507  257827 out.go:179] * Using Docker driver with root privileges
	I1210 23:03:49.107612  257827 cni.go:84] Creating CNI manager for ""
	I1210 23:03:49.107712  257827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:03:49.107726  257827 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:03:49.107820  257827 start.go:353] cluster config:
	{Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:03:49.109119  257827 out.go:179] * Starting "no-preload-092439" primary control-plane node in "no-preload-092439" cluster
	I1210 23:03:49.110396  257827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:03:49.111539  257827 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:03:49.112594  257827 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:03:49.112702  257827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:03:49.112714  257827 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/config.json ...
	I1210 23:03:49.112744  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/config.json: {Name:mk382929cc2c549a45ba9315a93e1649c33fdf76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:03:49.112889  257827 cache.go:107] acquiring lock: {Name:mk28fded00b2eb43f464ddd8b45bc4e4ec08bb3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.112888  257827 cache.go:107] acquiring lock: {Name:mka56d5112841f21b3e7353ebb0e43779ce575dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.112927  257827 cache.go:107] acquiring lock: {Name:mk8a6aa013168b15dbefc5af313f4b71504c3f5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.112939  257827 cache.go:107] acquiring lock: {Name:mkdab71c46745e396cd56cf0c69b79eb6e9c81f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113009  257827 cache.go:115] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 23:03:49.113011  257827 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:49.113012  257827 cache.go:115] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 23:03:49.113019  257827 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.105µs
	I1210 23:03:49.113024  257827 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 95.835µs
	I1210 23:03:49.113034  257827 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 23:03:49.113034  257827 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 23:03:49.113022  257827 cache.go:107] acquiring lock: {Name:mkd2ed8297bc2ef6e52c45d6d09784d2954483e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113052  257827 cache.go:107] acquiring lock: {Name:mk4619f034a8ff7e5e9f09c156f5dc84cc50586a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113055  257827 cache.go:107] acquiring lock: {Name:mkfa5ba86b1b79d34dabf8df77d646828c1c0e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113044  257827 cache.go:107] acquiring lock: {Name:mkaebb267ce65474b38251f1ac7bb210058a59c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113094  257827 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:49.113140  257827 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:49.113004  257827 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:49.113218  257827 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:49.113296  257827 cache.go:115] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 23:03:49.113307  257827 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 312.729µs
	I1210 23:03:49.113332  257827 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 23:03:49.114248  257827 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:49.114271  257827 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:49.114310  257827 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:49.114400  257827 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:49.114400  257827 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:49.135248  257827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:03:49.135269  257827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:03:49.135283  257827 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:03:49.135308  257827 start.go:360] acquireMachinesLock for no-preload-092439: {Name:mk2bc719b9b9863bdb78b604a641e66b37f2b26f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.135393  257827 start.go:364] duration metric: took 71.683µs to acquireMachinesLock for "no-preload-092439"
	I1210 23:03:49.135416  257827 start.go:93] Provisioning new machine with config: &{Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:03:49.135508  257827 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:03:45.817716  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:03:46.400713  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:46.401080  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:46.401136  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:46.401190  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:46.439436  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:46.439457  218555 cri.go:89] found id: ""
	I1210 23:03:46.439471  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:46.439524  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:46.443359  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:46.443410  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:46.479759  218555 cri.go:89] found id: ""
	I1210 23:03:46.479781  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.479792  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:46.479800  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:46.479854  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:46.519178  218555 cri.go:89] found id: ""
	I1210 23:03:46.519208  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.519219  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:46.519227  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:46.519282  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:46.556615  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:46.556655  218555 cri.go:89] found id: ""
	I1210 23:03:46.556666  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:46.556730  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:46.560620  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:46.560707  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:46.598456  218555 cri.go:89] found id: ""
	I1210 23:03:46.598479  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.598489  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:46.598496  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:46.598560  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:46.636028  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:46.636052  218555 cri.go:89] found id: ""
	I1210 23:03:46.636061  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:46.636120  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:46.639950  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:46.640017  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:46.677011  218555 cri.go:89] found id: ""
	I1210 23:03:46.677038  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.677049  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:46.677058  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:46.677116  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:46.713969  218555 cri.go:89] found id: ""
	I1210 23:03:46.713987  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.713994  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:46.714002  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:46.714014  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:46.783136  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:46.783165  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:46.783188  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:46.825000  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:46.825033  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:46.909356  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:46.909381  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:46.951499  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:46.951577  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:47.015396  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:47.015433  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:47.062000  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:47.062029  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:47.160216  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:47.160244  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:49.679712  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:49.680175  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:49.680241  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:49.680306  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:49.717917  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:49.717940  218555 cri.go:89] found id: ""
	I1210 23:03:49.717950  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:49.718007  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:49.722105  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:49.722175  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:49.770493  218555 cri.go:89] found id: ""
	I1210 23:03:49.770519  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.770530  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:49.770537  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:49.770598  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:49.810810  218555 cri.go:89] found id: ""
	I1210 23:03:49.810835  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.810845  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:49.810852  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:49.810905  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:49.849431  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:49.849456  218555 cri.go:89] found id: ""
	I1210 23:03:49.849466  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:49.849524  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:47.451770  252278 out.go:252]   - Generating certificates and keys ...
	I1210 23:03:47.451848  252278 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:03:47.451927  252278 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:03:47.699264  252278 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:03:48.055765  252278 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:03:48.157671  252278 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:03:48.275034  252278 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:03:48.350884  252278 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:03:48.351036  252278 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-280530] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 23:03:48.516123  252278 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:03:48.516248  252278 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-280530] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 23:03:48.571794  252278 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:03:48.862494  252278 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:03:49.109571  252278 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:03:49.109730  252278 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:03:49.408901  252278 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:03:49.729196  252278 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:03:49.866839  252278 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:03:50.004929  252278 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:03:50.005745  252278 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:03:50.010861  252278 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:03:50.012328  252278 out.go:252]   - Booting up control plane ...
	I1210 23:03:50.012474  252278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:03:50.012603  252278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:03:50.013471  252278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:03:50.030064  252278 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:03:50.031152  252278 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:03:50.031221  252278 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:03:50.141703  252278 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 23:03:49.138302  257827 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:03:49.138547  257827 start.go:159] libmachine.API.Create for "no-preload-092439" (driver="docker")
	I1210 23:03:49.138609  257827 client.go:173] LocalClient.Create starting
	I1210 23:03:49.138686  257827 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:03:49.138729  257827 main.go:143] libmachine: Decoding PEM data...
	I1210 23:03:49.138757  257827 main.go:143] libmachine: Parsing certificate...
	I1210 23:03:49.138816  257827 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:03:49.138843  257827 main.go:143] libmachine: Decoding PEM data...
	I1210 23:03:49.138861  257827 main.go:143] libmachine: Parsing certificate...
	I1210 23:03:49.139221  257827 cli_runner.go:164] Run: docker network inspect no-preload-092439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:03:49.157860  257827 cli_runner.go:211] docker network inspect no-preload-092439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:03:49.157927  257827 network_create.go:284] running [docker network inspect no-preload-092439] to gather additional debugging logs...
	I1210 23:03:49.157951  257827 cli_runner.go:164] Run: docker network inspect no-preload-092439
	W1210 23:03:49.177626  257827 cli_runner.go:211] docker network inspect no-preload-092439 returned with exit code 1
	I1210 23:03:49.177667  257827 network_create.go:287] error running [docker network inspect no-preload-092439]: docker network inspect no-preload-092439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-092439 not found
	I1210 23:03:49.177682  257827 network_create.go:289] output of [docker network inspect no-preload-092439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-092439 not found
	
	** /stderr **
	I1210 23:03:49.177761  257827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:03:49.197092  257827 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:03:49.197867  257827 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:03:49.198436  257827 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:03:49.199148  257827 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ba4ba5106fb6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:5d:c2:fb:6c:d4} reservation:<nil>}
	I1210 23:03:49.199550  257827 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a08a4bae7c44 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:f8:26:0e:4e:af} reservation:<nil>}
	I1210 23:03:49.200360  257827 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0ac00}
	I1210 23:03:49.200389  257827 network_create.go:124] attempt to create docker network no-preload-092439 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:03:49.200444  257827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-092439 no-preload-092439
	I1210 23:03:49.251252  257827 network_create.go:108] docker network no-preload-092439 192.168.94.0/24 created
	I1210 23:03:49.251287  257827 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-092439" container
	I1210 23:03:49.251352  257827 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:03:49.252788  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:49.260006  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 23:03:49.261150  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 23:03:49.263779  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:49.271918  257827 cli_runner.go:164] Run: docker volume create no-preload-092439 --label name.minikube.sigs.k8s.io=no-preload-092439 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:03:49.291289  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 23:03:49.291288  257827 oci.go:103] Successfully created a docker volume no-preload-092439
	I1210 23:03:49.291390  257827 cli_runner.go:164] Run: docker run --rm --name no-preload-092439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-092439 --entrypoint /usr/bin/test -v no-preload-092439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:03:49.661171  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1210 23:03:49.661196  257827 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 548.321549ms
	I1210 23:03:49.661208  257827 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1210 23:03:49.729719  257827 oci.go:107] Successfully prepared a docker volume no-preload-092439
	I1210 23:03:49.729763  257827 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1210 23:03:49.729845  257827 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:03:49.729879  257827 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:03:49.729920  257827 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:03:49.788312  257827 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-092439 --name no-preload-092439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-092439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-092439 --network no-preload-092439 --ip 192.168.94.2 --volume no-preload-092439:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:03:50.105072  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Running}}
	I1210 23:03:50.127365  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:03:50.149395  257827 cli_runner.go:164] Run: docker exec no-preload-092439 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:03:50.215130  257827 oci.go:144] the created container "no-preload-092439" has a running status.
	I1210 23:03:50.215164  257827 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa...
	I1210 23:03:50.253901  257827 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:03:50.301388  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:03:50.335525  257827 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:03:50.335548  257827 kic_runner.go:114] Args: [docker exec --privileged no-preload-092439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:03:50.420583  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:03:50.453971  257827 machine.go:94] provisionDockerMachine start ...
	I1210 23:03:50.454071  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:50.487214  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:50.487570  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:50.487590  257827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:03:50.488367  257827 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48786->127.0.0.1:33064: read: connection reset by peer
	I1210 23:03:50.489381  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1210 23:03:50.489487  257827 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.376433556s
	I1210 23:03:50.489515  257827 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1210 23:03:50.581691  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1210 23:03:50.581724  257827 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.468672941s
	I1210 23:03:50.581741  257827 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1210 23:03:50.600525  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1210 23:03:50.600553  257827 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.487532203s
	I1210 23:03:50.600566  257827 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1210 23:03:50.601477  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 23:03:50.601498  257827 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.488575462s
	I1210 23:03:50.601511  257827 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 23:03:50.601527  257827 cache.go:87] Successfully saved all images to host disk.
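The cache lines above all follow one pattern per image: check whether a per-image tarball already exists under .minikube/cache/images/amd64/..., save it only when missing, and log the elapsed time before declaring "Successfully saved all images to host disk." A minimal Go sketch of that exists-or-save check follows; saveImageToTar and the cache root are hypothetical stand-ins for illustration, not minikube's actual API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// saveImageToTar is a hypothetical stand-in for the code that pulls an image
// and writes it out as a tarball.
func saveImageToTar(image, dest string) error {
	return os.WriteFile(dest, []byte("placeholder"), 0o644)
}

// cachePathFor maps "registry.k8s.io/kube-proxy:v1.35.0-beta.0" to a path like
// ".../cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0".
func cachePathFor(cacheRoot, image string) string {
	return filepath.Join(cacheRoot, strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(cacheRoot, image string) error {
	dest := cachePathFor(cacheRoot, image)
	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("%s exists\n", dest) // corresponds to the "cache.go:157 ... exists" lines
		return nil
	}
	start := time.Now()
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return err
	}
	if err := saveImageToTar(image, dest); err != nil {
		return err
	}
	fmt.Printf("cache image %q -> %q took %s\n", image, dest, time.Since(start))
	return nil
}

func main() {
	root := "/tmp/minikube-cache/images/amd64" // hypothetical cache root
	for _, img := range []string{
		"registry.k8s.io/kube-proxy:v1.35.0-beta.0",
		"registry.k8s.io/kube-scheduler:v1.35.0-beta.0",
	} {
		if err := ensureCached(root, img); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}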
	I1210 23:03:53.626519  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-092439
	
	I1210 23:03:53.626552  257827 ubuntu.go:182] provisioning hostname "no-preload-092439"
	I1210 23:03:53.626633  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:53.646330  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:53.646636  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:53.646668  257827 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-092439 && echo "no-preload-092439" | sudo tee /etc/hostname
	I1210 23:03:53.791790  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-092439
	
	I1210 23:03:53.791871  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:53.810001  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:53.810284  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:53.810302  257827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-092439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-092439/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-092439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:03:53.944369  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
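The provisioning step above runs two SSH commands: one sets the kernel hostname and writes /etc/hostname, the other ensures /etc/hosts maps 127.0.1.1 to the new name, rewriting an existing 127.0.1.1 line if there is one and appending otherwise. A small Go sketch of the same idempotent /etc/hosts edit, applied to an in-memory copy of the file; fixHosts is illustrative, not minikube's code.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// fixHosts mirrors the shell above: if no line already ends with the hostname,
// either rewrite an existing "127.0.1.1 ..." line or append a new one.
func fixHosts(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
		return hosts // hostname already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n" // invented sample contents
	fmt.Print(fixHosts(before, "no-preload-092439"))
}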
	I1210 23:03:53.944411  257827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:03:53.944463  257827 ubuntu.go:190] setting up certificates
	I1210 23:03:53.944476  257827 provision.go:84] configureAuth start
	I1210 23:03:53.944555  257827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-092439
	I1210 23:03:50.819468  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 23:03:50.819524  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:50.819576  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:50.845805  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:03:50.845823  215904 cri.go:89] found id: "b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	I1210 23:03:50.845826  215904 cri.go:89] found id: ""
	I1210 23:03:50.845833  215904 logs.go:282] 2 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2]
	I1210 23:03:50.845878  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.849783  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.853373  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:50.853434  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:50.878001  215904 cri.go:89] found id: ""
	I1210 23:03:50.878026  215904 logs.go:282] 0 containers: []
	W1210 23:03:50.878036  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:03:50.878047  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:50.878096  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:50.904840  215904 cri.go:89] found id: ""
	I1210 23:03:50.904865  215904 logs.go:282] 0 containers: []
	W1210 23:03:50.904877  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:03:50.904884  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:50.904946  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:50.930819  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:03:50.930842  215904 cri.go:89] found id: ""
	I1210 23:03:50.930852  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:03:50.930914  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.934677  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:50.934740  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:50.959479  215904 cri.go:89] found id: ""
	I1210 23:03:50.959504  215904 logs.go:282] 0 containers: []
	W1210 23:03:50.959514  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:50.959522  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:50.959580  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:50.987741  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:03:50.987759  215904 cri.go:89] found id: "d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:03:50.987763  215904 cri.go:89] found id: ""
	I1210 23:03:50.987769  215904 logs.go:282] 2 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032]
	I1210 23:03:50.987816  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.991709  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.995253  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:50.995321  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:51.020889  215904 cri.go:89] found id: ""
	I1210 23:03:51.020912  215904 logs.go:282] 0 containers: []
	W1210 23:03:51.020923  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:51.020931  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:51.020989  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:51.046173  215904 cri.go:89] found id: ""
	I1210 23:03:51.046198  215904 logs.go:282] 0 containers: []
	W1210 23:03:51.046207  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:51.046225  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:51.046238  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:51.091807  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:51.091838  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:51.180080  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:51.180115  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:51.195098  215904 logs.go:123] Gathering logs for kube-apiserver [b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2] ...
	I1210 23:03:51.195130  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	I1210 23:03:51.225398  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:03:51.225429  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:03:51.252108  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:03:51.252139  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:51.282141  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:51.282176  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
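When the apiserver health check keeps failing (process 215904 above), the log falls into a diagnostic sweep: for each control-plane component it lists matching containers with crictl ps -a --quiet --name=<component>, tails the last 400 lines of each container it finds with crictl logs, and adds journalctl output for crio and kubelet plus dmesg and a node describe. A rough Go sketch of that sweep using os/exec; the component list and the 400-line tail come from the log, everything else is simplified.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output; errors are ignored
// in this sketch because the sweep is best-effort diagnostics.
func run(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return string(out)
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := strings.Fields(run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("=== %s [%s] ===\n%s", c, id,
				run("sudo", "crictl", "logs", "--tail", "400", id))
		}
	}
	// System-level logs gathered alongside the per-container ones.
	fmt.Print(run("sudo", "journalctl", "-u", "crio", "-n", "400"))
	fmt.Print(run("sudo", "journalctl", "-u", "kubelet", "-n", "400"))
}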
	I1210 23:03:49.854289  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:49.854369  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:49.891453  218555 cri.go:89] found id: ""
	I1210 23:03:49.891479  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.891489  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:49.891497  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:49.891555  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:49.931613  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:49.931639  218555 cri.go:89] found id: ""
	I1210 23:03:49.931665  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:49.931729  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:49.936618  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:49.936706  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:49.982709  218555 cri.go:89] found id: ""
	I1210 23:03:49.982734  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.982744  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:49.982752  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:49.982817  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:50.023164  218555 cri.go:89] found id: ""
	I1210 23:03:50.023192  218555 logs.go:282] 0 containers: []
	W1210 23:03:50.023202  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:50.023213  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:50.023228  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:50.080763  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:50.080830  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:50.125804  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:50.125830  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:50.236468  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:50.236499  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:50.259638  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:50.260348  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:50.357931  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:50.358061  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:50.358085  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:50.425395  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:50.425427  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:50.536606  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:50.536639  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:53.085173  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:53.085633  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:53.085726  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:53.085793  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:53.126325  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:53.126350  218555 cri.go:89] found id: ""
	I1210 23:03:53.126369  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:53.126482  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:53.131634  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:53.131722  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:53.175441  218555 cri.go:89] found id: ""
	I1210 23:03:53.175467  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.175479  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:53.175486  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:53.175546  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:53.217100  218555 cri.go:89] found id: ""
	I1210 23:03:53.217128  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.217139  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:53.217148  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:53.217209  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:53.251001  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:53.251023  218555 cri.go:89] found id: ""
	I1210 23:03:53.251034  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:53.251097  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:53.254791  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:53.254856  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:53.290240  218555 cri.go:89] found id: ""
	I1210 23:03:53.290266  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.290274  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:53.290281  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:53.290337  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:53.325033  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:53.325051  218555 cri.go:89] found id: ""
	I1210 23:03:53.325059  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:53.325124  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:53.328852  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:53.328918  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:53.362349  218555 cri.go:89] found id: ""
	I1210 23:03:53.362375  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.362387  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:53.362395  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:53.362456  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:53.396071  218555 cri.go:89] found id: ""
	I1210 23:03:53.396098  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.396109  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:53.396122  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:53.396140  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:53.412380  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:53.412414  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:53.470614  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:53.470637  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:53.470685  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:53.507634  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:53.507669  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:53.583598  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:53.583626  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:53.617850  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:53.617876  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:53.665128  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:53.665155  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:53.704910  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:53.704935  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:54.644473  252278 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502932 seconds
	I1210 23:03:54.644689  252278 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:03:54.658221  252278 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:03:55.181120  252278 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:03:55.181310  252278 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-280530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:03:55.692305  252278 kubeadm.go:319] [bootstrap-token] Using token: qidm8r.gttynu6ydc93qzk4
	I1210 23:03:53.963220  257827 provision.go:143] copyHostCerts
	I1210 23:03:53.963291  257827 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:03:53.963303  257827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:03:53.963371  257827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:03:53.963470  257827 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:03:53.963484  257827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:03:53.963515  257827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:03:53.963572  257827 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:03:53.963582  257827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:03:53.963604  257827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:03:53.963670  257827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.no-preload-092439 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-092439]
	I1210 23:03:54.062138  257827 provision.go:177] copyRemoteCerts
	I1210 23:03:54.062221  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:03:54.062275  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.083770  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.185672  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:03:54.207215  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:03:54.229589  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 23:03:54.251129  257827 provision.go:87] duration metric: took 306.636463ms to configureAuth
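configureAuth above regenerates the docker-machine style TLS material: it copies the host CA and client certs into the profile, generates a server certificate whose SANs cover every name the node will be reached by (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-092439), and scp's ca.pem, server.pem and server-key.pem into /etc/docker on the node. A compact Go sketch of producing a certificate with those SANs; it is self-signed for brevity, whereas minikube signs server.pem with the profile's ca.pem/ca-key.pem.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs mirror the log line: the IPs and DNS names the server cert must cover.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-092439"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-092439"},
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Self-signed here; the real flow signs with the minikube CA instead of tmpl itself.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}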
	I1210 23:03:54.251156  257827 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:03:54.251360  257827 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:03:54.251497  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.273444  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:54.273732  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:54.273764  257827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:03:54.571798  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:03:54.571821  257827 machine.go:97] duration metric: took 4.117823734s to provisionDockerMachine
	I1210 23:03:54.571833  257827 client.go:176] duration metric: took 5.433213469s to LocalClient.Create
	I1210 23:03:54.571858  257827 start.go:167] duration metric: took 5.433311706s to libmachine.API.Create "no-preload-092439"
	I1210 23:03:54.571868  257827 start.go:293] postStartSetup for "no-preload-092439" (driver="docker")
	I1210 23:03:54.571888  257827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:03:54.571974  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:03:54.572024  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.591589  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.690878  257827 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:03:54.694424  257827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:03:54.694455  257827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:03:54.694469  257827 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:03:54.694526  257827 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:03:54.694607  257827 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:03:54.694725  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:03:54.702953  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:03:54.723820  257827 start.go:296] duration metric: took 151.91875ms for postStartSetup
	I1210 23:03:54.724196  257827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-092439
	I1210 23:03:54.742824  257827 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/config.json ...
	I1210 23:03:54.743116  257827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:03:54.743157  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.760997  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.855968  257827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:03:54.860665  257827 start.go:128] duration metric: took 5.725126155s to createHost
	I1210 23:03:54.860691  257827 start.go:83] releasing machines lock for "no-preload-092439", held for 5.72528499s
	I1210 23:03:54.860751  257827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-092439
	I1210 23:03:54.879053  257827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:03:54.879087  257827 ssh_runner.go:195] Run: cat /version.json
	I1210 23:03:54.879133  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.879137  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.897742  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.898734  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:55.047075  257827 ssh_runner.go:195] Run: systemctl --version
	I1210 23:03:55.054209  257827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:03:55.093435  257827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:03:55.098762  257827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:03:55.098836  257827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:03:55.126907  257827 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
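The find/mv step above side-lines any pre-existing bridge or podman CNI configs in /etc/cni/net.d by renaming them with a .mk_disabled suffix, so they cannot clash with the CNI minikube configures later (kindnet, per the other profile in this log). A small Go sketch of the same rename, assuming it runs as root on the node.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d" // same directory the find command above scans
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Same filter as the find expression: bridge/podman configs not already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("disabled", src)
	}
}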
	I1210 23:03:55.126927  257827 start.go:496] detecting cgroup driver to use...
	I1210 23:03:55.126960  257827 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:03:55.127009  257827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:03:55.143011  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:03:55.155204  257827 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:03:55.155251  257827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:03:55.172053  257827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:03:55.191805  257827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:03:55.273885  257827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:03:55.361951  257827 docker.go:234] disabling docker service ...
	I1210 23:03:55.362016  257827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:03:55.380406  257827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:03:55.393346  257827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:03:55.479164  257827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:03:55.562681  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:03:55.575631  257827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:03:55.589605  257827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:03:55.589697  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.600244  257827 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:03:55.600293  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.608914  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.617395  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.626015  257827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:03:55.633959  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.642341  257827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.655427  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.663822  257827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:03:55.671325  257827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:03:55.678767  257827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:03:55.770099  257827 ssh_runner.go:195] Run: sudo systemctl restart crio
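The sequence above reconfigures CRI-O entirely through sed edits on /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.10.1, force cgroup_manager to "systemd" with conmon_cgroup = "pod", seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0, enable ip_forward, then daemon-reload and restart crio. A pure-Go sketch of the first two rewrites applied to the file contents in memory; the sample input is invented and the real drop-in carries more keys.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// Equivalent of the cgroup_manager rewrite plus appending conmon_cgroup = "pod" after it.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}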
	I1210 23:03:55.919023  257827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:03:55.919094  257827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:03:55.923957  257827 start.go:564] Will wait 60s for crictl version
	I1210 23:03:55.924011  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:55.928256  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:03:55.956121  257827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:03:55.956217  257827 ssh_runner.go:195] Run: crio --version
	I1210 23:03:55.996761  257827 ssh_runner.go:195] Run: crio --version
	I1210 23:03:56.027275  257827 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 23:03:55.693945  252278 out.go:252]   - Configuring RBAC rules ...
	I1210 23:03:55.694092  252278 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:03:55.701582  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:03:55.714571  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:03:55.716335  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:03:55.719381  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:03:55.722732  252278 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:03:55.733519  252278 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:03:55.922348  252278 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:03:56.106768  252278 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:03:56.107629  252278 kubeadm.go:319] 
	I1210 23:03:56.107717  252278 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:03:56.107727  252278 kubeadm.go:319] 
	I1210 23:03:56.107837  252278 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:03:56.107860  252278 kubeadm.go:319] 
	I1210 23:03:56.107904  252278 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:03:56.107967  252278 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:03:56.108016  252278 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:03:56.108022  252278 kubeadm.go:319] 
	I1210 23:03:56.108095  252278 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:03:56.108104  252278 kubeadm.go:319] 
	I1210 23:03:56.108168  252278 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:03:56.108175  252278 kubeadm.go:319] 
	I1210 23:03:56.108237  252278 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:03:56.108308  252278 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:03:56.108363  252278 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:03:56.108373  252278 kubeadm.go:319] 
	I1210 23:03:56.108437  252278 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:03:56.108499  252278 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:03:56.108505  252278 kubeadm.go:319] 
	I1210 23:03:56.108624  252278 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qidm8r.gttynu6ydc93qzk4 \
	I1210 23:03:56.108814  252278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:03:56.108838  252278 kubeadm.go:319] 	--control-plane 
	I1210 23:03:56.108841  252278 kubeadm.go:319] 
	I1210 23:03:56.108914  252278 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:03:56.108920  252278 kubeadm.go:319] 
	I1210 23:03:56.109017  252278 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qidm8r.gttynu6ydc93qzk4 \
	I1210 23:03:56.109153  252278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:03:56.111527  252278 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:03:56.111703  252278 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:03:56.111734  252278 cni.go:84] Creating CNI manager for ""
	I1210 23:03:56.111744  252278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:03:56.114187  252278 out.go:179] * Configuring CNI (Container Networking Interface) ...
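The join command printed by kubeadm above carries --discovery-token-ca-cert-hash sha256:8e17e4a5..., which is the SHA-256 digest of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. A short Go sketch of recomputing that hash from the CA cert on the control-plane node; the /etc/kubernetes/pki/ca.crt path is the kubeadm default, assumed here.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA location
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert with SHA-256.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

If the printed value matches the hash in the join command, the joining node is talking to the same CA that issued the control plane's certificates.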
	I1210 23:03:56.028615  257827 cli_runner.go:164] Run: docker network inspect no-preload-092439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:03:56.045863  257827 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 23:03:56.050005  257827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:03:56.060478  257827 kubeadm.go:884] updating cluster {Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:03:56.060590  257827 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:03:56.060632  257827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:03:56.089941  257827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 23:03:56.089968  257827 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 23:03:56.090052  257827 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:56.090069  257827 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.090106  257827 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.090135  257827 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 23:03:56.090174  257827 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.090051  257827 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.090256  257827 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.090110  257827 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.091544  257827 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.091621  257827 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.091688  257827 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.091723  257827 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.091544  257827 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:56.091547  257827 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.091547  257827 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 23:03:56.091893  257827 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.216329  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.219113  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.219421  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.224438  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.233057  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.234505  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.252117  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 23:03:56.280815  257827 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 23:03:56.280864  257827 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.280909  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.280988  257827 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1210 23:03:56.281025  257827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.281109  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.281187  257827 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1210 23:03:56.281233  257827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.281349  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.292861  257827 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1210 23:03:56.292905  257827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.292957  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.296877  257827 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1210 23:03:56.296916  257827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.296962  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.296881  257827 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1210 23:03:56.297013  257827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.297065  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.301523  257827 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 23:03:56.301559  257827 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 23:03:56.301582  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.301598  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.301599  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.301684  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.301708  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.301732  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.301784  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.341885  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.341917  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.341993  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 23:03:56.347614  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.347682  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.347624  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.347727  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.386473  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 23:03:56.386485  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.386539  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.397912  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.398026  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.400527  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.406682  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.426978  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 23:03:56.431534  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 23:03:56.431710  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 23:03:56.433476  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:56.433599  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:56.442116  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 23:03:56.442181  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 23:03:56.442220  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 23:03:56.442262  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 23:03:56.449001  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:56.449101  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:56.457376  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 23:03:56.457456  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 23:03:56.465332  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 23:03:56.465433  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 23:03:56.465445  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 23:03:56.465476  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 23:03:56.465485  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465512  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1210 23:03:56.465535  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465565  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465572  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1210 23:03:56.465581  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1210 23:03:56.465597  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465612  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1210 23:03:56.465665  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 23:03:56.465691  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1210 23:03:56.486595  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 23:03:56.486634  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 23:03:56.622393  257827 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 23:03:56.622474  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 23:03:57.108726  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:57.129069  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 23:03:57.129114  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:57.129169  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:57.155670  257827 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 23:03:57.155718  257827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:57.155767  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:58.262424  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.133229723s)
	I1210 23:03:58.262453  257827 ssh_runner.go:235] Completed: which crictl: (1.106665082s)
	I1210 23:03:58.262509  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:58.262461  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1210 23:03:58.262567  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:58.262605  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:58.289461  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
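The cached-image load sequence above follows one pattern per image: stat the tarball under /var/lib/minikube/images, scp it over from the local cache only when that stat fails, then load it into CRI-O's store with "sudo podman load -i". A minimal Go sketch of that flow, assuming a hypothetical Runner abstraction rather than minikube's real ssh_runner:

package imagecache

import "fmt"

// Runner is a hypothetical stand-in for an SSH command runner on the node.
type Runner interface {
	Run(cmd string) error            // execute a shell command on the node
	Copy(local, remote string) error // scp a local file onto the node
}

// loadCachedImage mirrors the stat -> scp -> podman load steps in the log.
func loadCachedImage(r Runner, cacheFile, nodePath string) error {
	// Existence check: stat -c "%s %y" <nodePath>; a non-zero exit means the
	// tarball has not been transferred to the node yet.
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, nodePath)); err != nil {
		if err := r.Copy(cacheFile, nodePath); err != nil {
			return fmt.Errorf("transfer %s: %w", cacheFile, err)
		}
	}
	// Load the tarball into the CRI-O image store via podman.
	return r.Run("sudo podman load -i " + nodePath)
}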
	I1210 23:03:56.302707  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:56.303151  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:56.303210  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:56.303270  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:56.354123  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:56.354161  218555 cri.go:89] found id: ""
	I1210 23:03:56.354171  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:56.354230  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.361844  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:56.361934  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:56.429489  218555 cri.go:89] found id: ""
	I1210 23:03:56.429517  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.429534  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:56.429543  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:56.429605  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:56.485145  218555 cri.go:89] found id: ""
	I1210 23:03:56.485168  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.485188  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:56.485196  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:56.485248  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:56.527598  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:56.527624  218555 cri.go:89] found id: ""
	I1210 23:03:56.527656  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:56.527721  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.531877  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:56.531946  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:56.584604  218555 cri.go:89] found id: ""
	I1210 23:03:56.584633  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.584666  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:56.584682  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:56.584746  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:56.636847  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:56.636869  218555 cri.go:89] found id: ""
	I1210 23:03:56.636882  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:56.636945  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.641998  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:56.642078  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:56.690194  218555 cri.go:89] found id: ""
	I1210 23:03:56.690225  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.690235  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:56.690243  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:56.690308  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:56.745479  218555 cri.go:89] found id: ""
	I1210 23:03:56.745509  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.745520  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:56.745533  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:56.745549  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:56.773917  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:56.773961  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:56.850291  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:56.850315  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:56.850339  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:56.896229  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:56.896261  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:56.970979  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:56.971016  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:57.012693  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:57.012720  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:57.075057  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:57.075102  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:57.129214  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:57.129240  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:59.769754  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
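Throughout this stretch the old-k8s-version profile (process 218555) is looping on the apiserver /healthz endpoint and, whenever the probe fails, falling back to collecting container and journal logs. A rough sketch of such a readiness probe, assuming a plain HTTP client rather than minikube's internals:

package health

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy probes <endpoint>/healthz with a short timeout and treats
// any transport error or non-200 response as "not ready yet".
func apiserverHealthy(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A probe like this typically skips verification or supplies the
		// cluster CA; skipping is the simplest assumption for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return fmt.Errorf("apiserver not reachable: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}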
	I1210 23:03:56.115403  252278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:03:56.119690  252278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1210 23:03:56.119706  252278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:03:56.132915  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:03:57.212383  252278 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.079424658s)
	I1210 23:03:57.212423  252278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:03:57.212555  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:57.212637  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-280530 minikube.k8s.io/updated_at=2025_12_10T23_03_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=old-k8s-version-280530 minikube.k8s.io/primary=true
	I1210 23:03:57.224806  252278 ops.go:34] apiserver oom_adj: -16
	I1210 23:03:57.303727  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:57.803773  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:58.304822  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:58.803866  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:59.303844  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:59.804341  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:00.304364  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:59.403106  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.140472788s)
	I1210 23:03:59.403141  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1210 23:03:59.403144  257827 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.11365071s)
	I1210 23:03:59.403177  257827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 23:03:59.403223  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:59.403224  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 23:04:00.660482  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.25716445s)
	I1210 23:04:00.660510  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 23:04:00.660532  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 23:04:00.660538  257827 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.257279408s)
	I1210 23:04:00.660577  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 23:04:00.660578  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 23:04:00.660672  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 23:04:00.664816  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 23:04:00.664851  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 23:04:02.050542  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.389917281s)
	I1210 23:04:02.050569  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1210 23:04:02.050594  257827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 23:04:02.050673  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1210 23:04:03.264896  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.214198674s)
	I1210 23:04:03.264925  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 23:04:03.264962  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 23:04:03.265015  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 23:04:01.348582  215904 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066383086s)
	W1210 23:04:01.348624  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1210 23:04:01.348634  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:01.348665  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:01.387134  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:01.387172  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:01.417234  215904 logs.go:123] Gathering logs for kube-controller-manager [d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032] ...
	I1210 23:04:01.417259  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:03.947217  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:04.771523  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 23:04:04.771580  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:04.771638  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:04.814384  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:04.814409  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:04:04.814415  218555 cri.go:89] found id: ""
	I1210 23:04:04.814424  218555 logs.go:282] 2 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:04:04.814482  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.820086  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.825265  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:04.825341  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:00.803874  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:01.304333  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:01.804505  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:02.304446  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:02.804739  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:03.303757  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:03.803797  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:04.303828  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:04.804064  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:05.303793  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
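The repeated "kubectl get sa default" calls from process 252278 above are a readiness poll: bootstrap waits, at roughly 500ms intervals, for the default service account to exist before continuing. A minimal sketch of that loop, with hypothetical parameter names:

package sawait

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// timeout elapses; success signals the control plane is serving requests.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}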
	I1210 23:04:04.349322  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.084282593s)
	I1210 23:04:04.349348  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1210 23:04:04.349377  257827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 23:04:04.349424  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 23:04:04.912910  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 23:04:04.912956  257827 cache_images.go:125] Successfully loaded all cached images
	I1210 23:04:04.912963  257827 cache_images.go:94] duration metric: took 8.822978565s to LoadCachedImages
	I1210 23:04:04.912978  257827 kubeadm.go:935] updating node { 192.168.94.2  8443 v1.35.0-beta.0 crio true true} ...
	I1210 23:04:04.913101  257827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-092439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:04:04.913188  257827 ssh_runner.go:195] Run: crio config
	I1210 23:04:04.969467  257827 cni.go:84] Creating CNI manager for ""
	I1210 23:04:04.969494  257827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:04:04.969516  257827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:04:04.969550  257827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-092439 NodeName:no-preload-092439 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:04:04.969712  257827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-092439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:04:04.969781  257827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 23:04:04.979515  257827 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1210 23:04:04.979581  257827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 23:04:04.989346  257827 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1210 23:04:04.989437  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1210 23:04:04.989488  257827 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1210 23:04:04.989507  257827 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1210 23:04:04.994506  257827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1210 23:04:04.994540  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1210 23:04:05.940488  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:04:05.954029  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1210 23:04:05.958090  257827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1210 23:04:05.958122  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1210 23:04:06.095516  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1210 23:04:06.101458  257827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1210 23:04:06.101500  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1210 23:04:06.309174  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:04:06.318411  257827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 23:04:06.332713  257827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 23:04:06.447145  257827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1210 23:04:06.460959  257827 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:04:06.464968  257827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:04:06.475858  257827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:04:06.562681  257827 ssh_runner.go:195] Run: sudo systemctl start kubelet
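The /etc/hosts update a few lines above is done idempotently: a grep first checks for an exact "<ip><TAB>control-plane.minikube.internal" entry, and only when it is missing is the file rewritten by filtering out any stale entry and appending the current mapping. A sketch of the same trick (illustrative only, not minikube's code):

package hosts

import (
	"fmt"
	"os/exec"
)

// pinControlPlane ensures /etc/hosts maps control-plane.minikube.internal to ip.
func pinControlPlane(ip string) error {
	host := "control-plane.minikube.internal"
	// Fast path: an exact "<ip>\t<host>" entry already exists.
	if exec.Command("grep", fmt.Sprintf("%s\t%s$", ip, host), "/etc/hosts").Run() == nil {
		return nil
	}
	// Otherwise drop any stale entry for the host, append the current mapping,
	// and copy the rewritten file back into place.
	script := fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
	return exec.Command("/bin/bash", "-c", script).Run()
}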
	I1210 23:04:06.590530  257827 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439 for IP: 192.168.94.2
	I1210 23:04:06.590553  257827 certs.go:195] generating shared ca certs ...
	I1210 23:04:06.590573  257827 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.590751  257827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:04:06.590807  257827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:04:06.590821  257827 certs.go:257] generating profile certs ...
	I1210 23:04:06.590882  257827 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.key
	I1210 23:04:06.590910  257827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt with IP's: []
	I1210 23:04:06.679320  257827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt ...
	I1210 23:04:06.679356  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt: {Name:mk6e999ddf9fb4e249c890267ece03810e3898c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.679595  257827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.key ...
	I1210 23:04:06.679616  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.key: {Name:mk2e9c19d38df27e7f3571b8ba29f662af106455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.679772  257827 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d
	I1210 23:04:06.679797  257827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1210 23:04:06.892693  257827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d ...
	I1210 23:04:06.892719  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d: {Name:mk1979b8721bfea485b133d2aa14d24a9ab2e0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.892877  257827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d ...
	I1210 23:04:06.892892  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d: {Name:mkaf4c646957b7644aec53558fd1134dd056f4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.892973  257827 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt
	I1210 23:04:06.893058  257827 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key
	I1210 23:04:06.893137  257827 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key
	I1210 23:04:06.893154  257827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt with IP's: []
	I1210 23:04:06.949878  257827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt ...
	I1210 23:04:06.949905  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt: {Name:mkc46ec8a783c13fcfec4a1a70fed06549840b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.950064  257827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key ...
	I1210 23:04:06.950077  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key: {Name:mk141f21e851fac890a6275a10731cc4766b17cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.950248  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:04:06.950289  257827 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:04:06.950299  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:04:06.950323  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:04:06.950364  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:04:06.950387  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:04:06.950424  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:04:06.951027  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:04:06.970688  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:04:06.988617  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:04:07.006629  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:04:07.024964  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 23:04:07.042748  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:04:07.060192  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:04:07.077920  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 23:04:07.095394  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:04:07.117344  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:04:07.136548  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:04:07.155327  257827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:04:07.168880  257827 ssh_runner.go:195] Run: openssl version
	I1210 23:04:07.175126  257827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.182711  257827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:04:07.190453  257827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.194350  257827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.194399  257827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.229862  257827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:04:07.238436  257827 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:04:07.246102  257827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.253589  257827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:04:07.261136  257827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.265071  257827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.265127  257827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.299174  257827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:04:07.307493  257827 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:04:07.315601  257827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.323376  257827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:04:07.331271  257827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.335133  257827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.335215  257827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.373366  257827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:04:07.381239  257827 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
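The openssl/ln sequence above is how the CA bundle gets wired up for OpenSSL-based clients: each PEM placed under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and a "<hash>.0" symlink is created under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A hedged sketch of that step, using a hypothetical helper:

package certs

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert derives the OpenSSL subject hash of a PEM and links it into
// /etc/ssl/certs so TLS clients that scan that directory can find it.
func installCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// ln -fs keeps the operation idempotent across repeated starts.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}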
	I1210 23:04:07.388791  257827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:04:07.392512  257827 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:04:07.392562  257827 kubeadm.go:401] StartCluster: {Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP
: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:04:07.392638  257827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:04:07.392699  257827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:04:07.421340  257827 cri.go:89] found id: ""
	I1210 23:04:07.421407  257827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:04:07.430132  257827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:04:07.438656  257827 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:04:07.438718  257827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:04:07.447325  257827 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:04:07.447355  257827 kubeadm.go:158] found existing configuration files:
	
	I1210 23:04:07.447401  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:04:07.455875  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:04:07.455933  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:04:07.464244  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:04:07.472284  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:04:07.472344  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:04:07.479571  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:04:07.487086  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:04:07.487148  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:04:07.494438  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:04:07.502227  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:04:07.502281  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
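Before kubeadm init runs, the lines above show a stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for https://control-plane.minikube.internal:8443 and removed when that endpoint is absent (or the file does not exist), so kubeadm can write a fresh one. A minimal sketch, with illustrative names only:

package bootstrap

import "os/exec"

// cleanupStaleConfigs removes kubeconfigs that do not point at the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log above.
func cleanupStaleConfigs() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}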
	I1210 23:04:07.509860  257827 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:04:07.546842  257827 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 23:04:07.546934  257827 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:04:07.609756  257827 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:04:07.609889  257827 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:04:07.609957  257827 kubeadm.go:319] OS: Linux
	I1210 23:04:07.610000  257827 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:04:07.610074  257827 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:04:07.610139  257827 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:04:07.610213  257827 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:04:07.610303  257827 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:04:07.610383  257827 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:04:07.610433  257827 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:04:07.610481  257827 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:04:07.667693  257827 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:04:07.667845  257827 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:04:07.667982  257827 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:04:07.680343  257827 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 23:04:07.682304  257827 out.go:252]   - Generating certificates and keys ...
	I1210 23:04:07.682417  257827 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:04:07.682537  257827 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:04:07.763288  257827 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:04:07.801270  257827 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:04:07.834237  257827 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:04:07.929206  257827 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:04:08.043149  257827 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:04:08.043332  257827 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-092439] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:04:08.231301  257827 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:04:08.231511  257827 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-092439] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:04:08.370113  257827 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:04:08.455113  257827 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:04:08.539395  257827 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:04:08.539523  257827 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:04:08.647549  257827 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:04:08.744162  257827 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:04:08.892449  257827 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:04:09.052806  257827 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:04:09.068716  257827 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:04:09.069449  257827 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:04:09.073614  257827 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:04:04.736558  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:33534->192.168.103.2:8443: read: connection reset by peer
	I1210 23:04:04.736629  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:04.736715  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:04.766262  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:04.766283  215904 cri.go:89] found id: "b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	I1210 23:04:04.766292  215904 cri.go:89] found id: ""
	I1210 23:04:04.766300  215904 logs.go:282] 2 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2]
	I1210 23:04:04.766365  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.770528  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.774565  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:04.774629  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:04.807796  215904 cri.go:89] found id: ""
	I1210 23:04:04.807821  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.807832  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:04.807841  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:04.807897  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:04.846478  215904 cri.go:89] found id: ""
	I1210 23:04:04.846505  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.846521  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:04.846529  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:04.846595  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:04.881898  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:04.881922  215904 cri.go:89] found id: ""
	I1210 23:04:04.881932  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:04.881988  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.886959  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:04.887035  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:04.918619  215904 cri.go:89] found id: ""
	I1210 23:04:04.918639  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.918669  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:04.918677  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:04.918725  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:04.946483  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:04.946500  215904 cri.go:89] found id: "d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:04.946504  215904 cri.go:89] found id: ""
	I1210 23:04:04.946510  215904 logs.go:282] 2 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032]
	I1210 23:04:04.946554  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.951076  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.956120  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:04.956176  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:04.986385  215904 cri.go:89] found id: ""
	I1210 23:04:04.986409  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.986427  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:04.986434  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:04.986494  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:05.018045  215904 cri.go:89] found id: ""
	I1210 23:04:05.018070  215904 logs.go:282] 0 containers: []
	W1210 23:04:05.018080  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:05.018098  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:05.018113  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:05.057611  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:05.057653  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:05.094938  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:05.094980  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:05.133407  215904 logs.go:123] Gathering logs for kube-controller-manager [d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032] ...
	I1210 23:04:05.133434  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:05.173626  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:05.173677  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:05.213162  215904 logs.go:123] Gathering logs for kube-apiserver [b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2] ...
	I1210 23:04:05.213193  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	W1210 23:04:05.243729  215904 logs.go:130] failed kube-apiserver [b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2": Process exited with status 1
	stdout:
	
	stderr:
	E1210 23:04:05.241175    6038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist" containerID="b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	time="2025-12-10T23:04:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1210 23:04:05.241175    6038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist" containerID="b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	time="2025-12-10T23:04:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist"
	
	** /stderr **
	I1210 23:04:05.243749  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:05.243764  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:05.301919  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:05.301951  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:05.407800  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:05.407829  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:05.425479  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:05.425508  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:05.488947  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:07.990337  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:07.990742  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:07.990799  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:07.990846  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:08.020486  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:08.020506  215904 cri.go:89] found id: ""
	I1210 23:04:08.020514  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:08.020557  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.024490  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:08.024540  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:08.051556  215904 cri.go:89] found id: ""
	I1210 23:04:08.051579  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.051590  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:08.051599  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:08.051686  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:08.079097  215904 cri.go:89] found id: ""
	I1210 23:04:08.079126  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.079138  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:08.079147  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:08.079207  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:08.106624  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:08.106716  215904 cri.go:89] found id: ""
	I1210 23:04:08.106738  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:08.106793  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.110755  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:08.110819  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:08.137616  215904 cri.go:89] found id: ""
	I1210 23:04:08.137655  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.137668  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:08.137676  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:08.137739  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:08.165553  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:08.165578  215904 cri.go:89] found id: "d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:08.165584  215904 cri.go:89] found id: ""
	I1210 23:04:08.165593  215904 logs.go:282] 2 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032]
	I1210 23:04:08.165669  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.169908  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.173777  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:08.173843  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:08.208979  215904 cri.go:89] found id: ""
	I1210 23:04:08.209002  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.209013  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:08.209020  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:08.209074  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:08.237600  215904 cri.go:89] found id: ""
	I1210 23:04:08.237625  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.237636  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:08.237682  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:08.237703  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:08.254453  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:08.254491  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:08.311903  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:08.311923  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:08.311938  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:08.345268  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:08.345295  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:08.375988  215904 logs.go:123] Gathering logs for kube-controller-manager [d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032] ...
	I1210 23:04:08.376018  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:08.404349  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:08.404377  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:08.434947  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:08.434973  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:08.517221  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:08.517267  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:08.552270  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:08.552299  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:05.804802  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:06.303770  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:06.804356  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:07.304719  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:07.804695  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:08.303769  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:08.804436  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:09.304364  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:09.374894  252278 kubeadm.go:1114] duration metric: took 12.162385999s to wait for elevateKubeSystemPrivileges
	I1210 23:04:09.374938  252278 kubeadm.go:403] duration metric: took 22.35389953s to StartCluster
	I1210 23:04:09.374962  252278 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:09.375036  252278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:04:09.376119  252278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:09.376380  252278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:04:09.376403  252278 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:04:09.376823  252278 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:04:09.376904  252278 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:04:09.376914  252278 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-280530"
	I1210 23:04:09.376932  252278 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-280530"
	I1210 23:04:09.376950  252278 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-280530"
	I1210 23:04:09.376964  252278 host.go:66] Checking if "old-k8s-version-280530" exists ...
	I1210 23:04:09.376965  252278 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-280530"
	I1210 23:04:09.377323  252278 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:04:09.377496  252278 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:04:09.378843  252278 out.go:179] * Verifying Kubernetes components...
	I1210 23:04:09.380112  252278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:04:09.404782  252278 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:04:04.871973  218555 cri.go:89] found id: ""
	I1210 23:04:04.872000  218555 logs.go:282] 0 containers: []
	W1210 23:04:04.872010  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:04.872015  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:04.872075  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:04.915030  218555 cri.go:89] found id: ""
	I1210 23:04:04.915058  218555 logs.go:282] 0 containers: []
	W1210 23:04:04.915069  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:04.915078  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:04.915137  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:04.955844  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:04.955868  218555 cri.go:89] found id: ""
	I1210 23:04:04.955877  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:04.955933  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.959845  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:04.959895  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:05.002577  218555 cri.go:89] found id: ""
	I1210 23:04:05.002607  218555 logs.go:282] 0 containers: []
	W1210 23:04:05.002617  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:05.002626  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:05.002698  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:05.046807  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:05.046829  218555 cri.go:89] found id: ""
	I1210 23:04:05.046839  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:05.046900  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:05.051532  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:05.051595  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:05.104937  218555 cri.go:89] found id: ""
	I1210 23:04:05.104966  218555 logs.go:282] 0 containers: []
	W1210 23:04:05.104976  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:05.104984  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:05.105050  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:05.156907  218555 cri.go:89] found id: ""
	I1210 23:04:05.157151  218555 logs.go:282] 0 containers: []
	W1210 23:04:05.157174  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:05.157193  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:05.157228  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:05.275392  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:05.275428  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 23:04:09.405004  252278 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-280530"
	I1210 23:04:09.405047  252278 host.go:66] Checking if "old-k8s-version-280530" exists ...
	I1210 23:04:09.405501  252278 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:04:09.406343  252278 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:09.406365  252278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:04:09.406431  252278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280530
	I1210 23:04:09.436330  252278 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:09.436413  252278 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:04:09.436568  252278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280530
	I1210 23:04:09.442122  252278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/old-k8s-version-280530/id_rsa Username:docker}
	I1210 23:04:09.469470  252278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/old-k8s-version-280530/id_rsa Username:docker}
	I1210 23:04:09.489566  252278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:04:09.540605  252278 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:04:09.558840  252278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:09.582939  252278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:09.720245  252278 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 23:04:09.721449  252278 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-280530" to be "Ready" ...
	I1210 23:04:09.956687  252278 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:04:09.957810  252278 addons.go:530] duration metric: took 580.985826ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:04:10.225579  252278 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-280530" context rescaled to 1 replicas
	I1210 23:04:09.075098  257827 out.go:252]   - Booting up control plane ...
	I1210 23:04:09.075188  257827 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:04:09.075276  257827 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:04:09.076239  257827 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:04:09.090309  257827 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:04:09.090418  257827 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:04:09.097006  257827 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:04:09.097227  257827 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:04:09.097317  257827 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:04:09.201119  257827 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:04:09.201254  257827 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:04:10.203826  257827 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002380824s
	I1210 23:04:10.208576  257827 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:04:10.208724  257827 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1210 23:04:10.208851  257827 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:04:10.208957  257827 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:04:10.713840  257827 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.113715ms
	I1210 23:04:12.357039  257827 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.148380489s
	I1210 23:04:11.108288  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:11.108725  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:11.108786  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:11.108841  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:11.137853  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:11.137874  215904 cri.go:89] found id: ""
	I1210 23:04:11.137883  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:11.137942  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:11.142681  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:11.142757  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:11.170313  215904 cri.go:89] found id: ""
	I1210 23:04:11.170340  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.170352  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:11.170360  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:11.170417  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:11.198253  215904 cri.go:89] found id: ""
	I1210 23:04:11.198275  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.198285  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:11.198292  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:11.198359  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:11.228495  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:11.228519  215904 cri.go:89] found id: ""
	I1210 23:04:11.228528  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:11.228584  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:11.233253  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:11.233319  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:11.260462  215904 cri.go:89] found id: ""
	I1210 23:04:11.260485  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.260493  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:11.260499  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:11.260554  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:11.287583  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:11.287601  215904 cri.go:89] found id: ""
	I1210 23:04:11.287608  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:11.287672  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:11.291507  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:11.291565  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:11.317608  215904 cri.go:89] found id: ""
	I1210 23:04:11.317634  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.317658  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:11.317666  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:11.317727  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:11.344042  215904 cri.go:89] found id: ""
	I1210 23:04:11.344064  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.344072  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:11.344082  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:11.344094  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:11.374057  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:11.374085  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:11.399166  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:11.399191  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:11.428446  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:11.428476  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:11.488778  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:11.488808  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:11.522188  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:11.522220  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:11.627739  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:11.627771  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:11.647722  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:11.647752  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:11.714232  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:14.210770  257827 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002132406s
	I1210 23:04:14.229198  257827 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:04:14.240219  257827 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:04:14.250123  257827 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:04:14.250376  257827 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-092439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:04:14.258231  257827 kubeadm.go:319] [bootstrap-token] Using token: c62cuz.u4c8h8kjomii0rr4
	W1210 23:04:11.724319  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	W1210 23:04:13.724674  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	I1210 23:04:14.259935  257827 out.go:252]   - Configuring RBAC rules ...
	I1210 23:04:14.260078  257827 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:04:14.263057  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:04:14.268157  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:04:14.270677  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:04:14.274177  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:04:14.276735  257827 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:04:14.616365  257827 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:04:15.034562  257827 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:04:15.616168  257827 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:04:15.617273  257827 kubeadm.go:319] 
	I1210 23:04:15.617402  257827 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:04:15.617423  257827 kubeadm.go:319] 
	I1210 23:04:15.617529  257827 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:04:15.617540  257827 kubeadm.go:319] 
	I1210 23:04:15.617575  257827 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:04:15.617719  257827 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:04:15.617796  257827 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:04:15.617805  257827 kubeadm.go:319] 
	I1210 23:04:15.617905  257827 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:04:15.617926  257827 kubeadm.go:319] 
	I1210 23:04:15.617994  257827 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:04:15.618004  257827 kubeadm.go:319] 
	I1210 23:04:15.618087  257827 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:04:15.618185  257827 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:04:15.618305  257827 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:04:15.618318  257827 kubeadm.go:319] 
	I1210 23:04:15.618447  257827 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:04:15.618550  257827 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:04:15.618558  257827 kubeadm.go:319] 
	I1210 23:04:15.618678  257827 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c62cuz.u4c8h8kjomii0rr4 \
	I1210 23:04:15.618829  257827 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:04:15.618884  257827 kubeadm.go:319] 	--control-plane 
	I1210 23:04:15.618893  257827 kubeadm.go:319] 
	I1210 23:04:15.619011  257827 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:04:15.619030  257827 kubeadm.go:319] 
	I1210 23:04:15.619155  257827 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c62cuz.u4c8h8kjomii0rr4 \
	I1210 23:04:15.619330  257827 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:04:15.620915  257827 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:04:15.621015  257827 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:04:15.621044  257827 cni.go:84] Creating CNI manager for ""
	I1210 23:04:15.621055  257827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:04:15.622692  257827 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:04:15.624109  257827 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:04:15.628452  257827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 23:04:15.628471  257827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:04:15.643562  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:04:15.849052  257827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:04:15.849142  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:15.849158  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-092439 minikube.k8s.io/updated_at=2025_12_10T23_04_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=no-preload-092439 minikube.k8s.io/primary=true
	I1210 23:04:15.939485  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:15.939526  257827 ops.go:34] apiserver oom_adj: -16
	I1210 23:04:16.440475  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:16.939769  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:17.440424  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:17.940471  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:18.440380  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:18.939972  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:14.214907  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:14.215310  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:14.215355  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:14.215400  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:14.247354  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:14.247377  215904 cri.go:89] found id: ""
	I1210 23:04:14.247386  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:14.247460  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:14.252422  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:14.252493  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:14.282161  215904 cri.go:89] found id: ""
	I1210 23:04:14.282185  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.282197  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:14.282205  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:14.282258  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:14.309933  215904 cri.go:89] found id: ""
	I1210 23:04:14.309957  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.309975  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:14.309981  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:14.310036  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:14.339819  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:14.339845  215904 cri.go:89] found id: ""
	I1210 23:04:14.339855  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:14.339913  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:14.345157  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:14.345231  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:14.376119  215904 cri.go:89] found id: ""
	I1210 23:04:14.376140  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.376153  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:14.376159  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:14.376210  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:14.408446  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:14.408465  215904 cri.go:89] found id: ""
	I1210 23:04:14.408473  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:14.408524  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:14.413095  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:14.413169  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:14.441278  215904 cri.go:89] found id: ""
	I1210 23:04:14.441306  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.441317  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:14.441326  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:14.441393  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:14.469271  215904 cri.go:89] found id: ""
	I1210 23:04:14.469294  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.469304  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:14.469316  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:14.469329  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:14.498656  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:14.498685  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:14.585001  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:14.585032  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:14.600268  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:14.600293  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:14.672208  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:14.672232  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:14.672252  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:14.704082  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:14.704121  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:14.732691  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:14.732724  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:14.761212  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:14.761239  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:17.315740  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:17.316149  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:17.316200  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:17.316250  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:17.343893  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:17.343917  215904 cri.go:89] found id: ""
	I1210 23:04:17.343926  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:17.343985  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:17.347771  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:17.347836  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:17.375349  215904 cri.go:89] found id: ""
	I1210 23:04:17.375373  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.375381  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:17.375389  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:17.375445  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:17.402671  215904 cri.go:89] found id: ""
	I1210 23:04:17.402694  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.402702  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:17.402708  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:17.402751  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:17.428187  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:17.428211  215904 cri.go:89] found id: ""
	I1210 23:04:17.428219  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:17.428265  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:17.432134  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:17.432195  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:17.460760  215904 cri.go:89] found id: ""
	I1210 23:04:17.460786  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.460797  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:17.460804  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:17.460880  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:17.490355  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:17.490384  215904 cri.go:89] found id: ""
	I1210 23:04:17.490392  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:17.490450  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:17.494661  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:17.494723  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:17.523365  215904 cri.go:89] found id: ""
	I1210 23:04:17.523392  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.523401  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:17.523406  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:17.523454  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:17.551475  215904 cri.go:89] found id: ""
	I1210 23:04:17.551502  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.551517  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:17.551528  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:17.551542  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:17.577804  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:17.577829  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:17.625326  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:17.625356  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:17.656176  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:17.656209  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:17.744974  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:17.745012  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:17.760342  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:17.760367  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:17.816031  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:17.816058  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:17.816074  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:17.848696  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:17.848722  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:15.350708  218555 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.075259648s)
	W1210 23:04:15.350745  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1210 23:04:15.350755  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:15.350778  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:15.429494  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:15.429529  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:15.446885  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:15.446911  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:15.484414  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:04:15.484451  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:04:15.522898  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:15.522928  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:15.557158  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:15.557188  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:15.604917  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:15.604959  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:18.146306  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:18.146762  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:18.146824  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:18.146888  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:18.183129  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:18.183161  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:04:18.183167  218555 cri.go:89] found id: ""
	I1210 23:04:18.183177  218555 logs.go:282] 2 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:04:18.183253  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.187317  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.190819  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:18.190880  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:18.224530  218555 cri.go:89] found id: ""
	I1210 23:04:18.224553  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.224564  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:18.224571  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:18.224627  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:18.263252  218555 cri.go:89] found id: ""
	I1210 23:04:18.263281  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.263293  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:18.263301  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:18.263370  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:18.298950  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:18.298973  218555 cri.go:89] found id: ""
	I1210 23:04:18.298983  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:18.299039  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.302729  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:18.302779  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:18.337305  218555 cri.go:89] found id: ""
	I1210 23:04:18.337330  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.337340  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:18.337347  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:18.337410  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:18.371265  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:18.371290  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:18.371297  218555 cri.go:89] found id: ""
	I1210 23:04:18.371307  218555 logs.go:282] 2 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:18.371361  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.375054  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.378512  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:18.378555  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:18.412265  218555 cri.go:89] found id: ""
	I1210 23:04:18.412286  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.412294  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:18.412300  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:18.412356  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:18.446822  218555 cri.go:89] found id: ""
	I1210 23:04:18.446844  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.446852  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:18.446868  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:18.446883  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:18.500892  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:18.500926  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:18.601504  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:18.601543  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:18.635703  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:18.635745  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:18.674968  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:18.675001  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:18.691746  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:18.691771  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:18.751864  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:18.751885  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:18.751897  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:18.789927  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:04:18.789956  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	W1210 23:04:18.824468  218555 logs.go:130] failed kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9": Process exited with status 1
	stdout:
	
	stderr:
	E1210 23:04:18.821978    6181 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist" containerID="8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	time="2025-12-10T23:04:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1210 23:04:18.821978    6181 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist" containerID="8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	time="2025-12-10T23:04:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist"
	
	** /stderr **
	I1210 23:04:18.824489  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:18.824501  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:18.899449  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:18.899486  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	W1210 23:04:15.724718  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	W1210 23:04:18.225055  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	I1210 23:04:19.439513  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:19.939749  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:20.439743  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:20.516682  257827 kubeadm.go:1114] duration metric: took 4.6676217s to wait for elevateKubeSystemPrivileges
	I1210 23:04:20.516746  257827 kubeadm.go:403] duration metric: took 13.124163827s to StartCluster
	I1210 23:04:20.516771  257827 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:20.516843  257827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:04:20.518173  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:20.518415  257827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:04:20.518446  257827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:04:20.518505  257827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:04:20.518586  257827 addons.go:70] Setting storage-provisioner=true in profile "no-preload-092439"
	I1210 23:04:20.518608  257827 addons.go:239] Setting addon storage-provisioner=true in "no-preload-092439"
	I1210 23:04:20.518613  257827 addons.go:70] Setting default-storageclass=true in profile "no-preload-092439"
	I1210 23:04:20.518637  257827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-092439"
	I1210 23:04:20.518653  257827 host.go:66] Checking if "no-preload-092439" exists ...
	I1210 23:04:20.518701  257827 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:04:20.518961  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:04:20.519183  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:04:20.520383  257827 out.go:179] * Verifying Kubernetes components...
	I1210 23:04:20.522842  257827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:04:20.544345  257827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:04:20.545686  257827 addons.go:239] Setting addon default-storageclass=true in "no-preload-092439"
	I1210 23:04:20.545727  257827 host.go:66] Checking if "no-preload-092439" exists ...
	I1210 23:04:20.546139  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:04:20.546152  257827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:20.546171  257827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:04:20.546226  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:04:20.575111  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:04:20.577631  257827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:20.577662  257827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:04:20.577727  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:04:20.601974  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:04:20.613891  257827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:04:20.679131  257827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:04:20.697026  257827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:20.717697  257827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:20.827240  257827 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1210 23:04:20.829161  257827 node_ready.go:35] waiting up to 6m0s for node "no-preload-092439" to be "Ready" ...
	I1210 23:04:21.069799  257827 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:04:21.071186  257827 addons.go:530] duration metric: took 552.673088ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:04:21.333093  257827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-092439" context rescaled to 1 replicas
	W1210 23:04:22.832392  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	I1210 23:04:20.377720  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:20.378221  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:20.378280  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:20.378341  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:20.405573  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:20.405596  215904 cri.go:89] found id: ""
	I1210 23:04:20.405604  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:20.405677  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:20.409672  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:20.409728  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:20.435730  215904 cri.go:89] found id: ""
	I1210 23:04:20.435763  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.435775  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:20.435784  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:20.435840  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:20.471327  215904 cri.go:89] found id: ""
	I1210 23:04:20.471354  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.471365  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:20.471373  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:20.471431  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:20.503868  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:20.503894  215904 cri.go:89] found id: ""
	I1210 23:04:20.503904  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:20.503961  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:20.508945  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:20.509011  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:20.543224  215904 cri.go:89] found id: ""
	I1210 23:04:20.543252  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.543263  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:20.543270  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:20.543330  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:20.588065  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:20.588089  215904 cri.go:89] found id: ""
	I1210 23:04:20.588099  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:20.588153  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:20.593043  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:20.593108  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:20.627151  215904 cri.go:89] found id: ""
	I1210 23:04:20.627177  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.627192  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:20.627200  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:20.627258  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:20.661786  215904 cri.go:89] found id: ""
	I1210 23:04:20.661810  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.661821  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:20.661832  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:20.661848  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:20.699119  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:20.699149  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:20.736189  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:20.736229  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:20.804403  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:20.804454  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:20.848630  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:20.848690  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:20.968517  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:20.968547  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:20.985573  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:20.985601  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:21.075156  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:21.075180  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:21.075194  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:23.612780  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:23.613189  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:23.613238  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:23.613289  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:23.639136  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:23.639154  215904 cri.go:89] found id: ""
	I1210 23:04:23.639161  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:23.639214  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:23.643284  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:23.643348  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:23.670005  215904 cri.go:89] found id: ""
	I1210 23:04:23.670029  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.670039  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:23.670047  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:23.670122  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:23.697062  215904 cri.go:89] found id: ""
	I1210 23:04:23.697082  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.697090  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:23.697095  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:23.697151  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:23.724278  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:23.724295  215904 cri.go:89] found id: ""
	I1210 23:04:23.724302  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:23.724346  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:23.728182  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:23.728260  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:23.754134  215904 cri.go:89] found id: ""
	I1210 23:04:23.754156  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.754166  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:23.754182  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:23.754240  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:23.780985  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:23.781006  215904 cri.go:89] found id: ""
	I1210 23:04:23.781013  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:23.781057  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:23.785047  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:23.785116  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:23.813633  215904 cri.go:89] found id: ""
	I1210 23:04:23.813671  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.813683  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:23.813692  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:23.813742  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:23.841172  215904 cri.go:89] found id: ""
	I1210 23:04:23.841198  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.841206  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:23.841217  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:23.841228  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:23.926005  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:23.926053  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:23.941148  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:23.941176  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:23.997042  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:23.997069  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:23.997090  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:24.026943  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:24.026970  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:24.053337  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:24.053364  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:24.080338  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:24.080368  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:24.131017  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:24.131050  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:21.434291  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:21.434794  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:21.434855  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:21.434915  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:21.483862  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:21.483889  218555 cri.go:89] found id: ""
	I1210 23:04:21.483899  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:21.483963  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.488277  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:21.488345  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:21.530619  218555 cri.go:89] found id: ""
	I1210 23:04:21.530654  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.530664  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:21.530672  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:21.530735  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:21.578503  218555 cri.go:89] found id: ""
	I1210 23:04:21.578532  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.578543  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:21.578551  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:21.578613  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:21.620112  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:21.620133  218555 cri.go:89] found id: ""
	I1210 23:04:21.620142  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:21.620193  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.624047  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:21.624124  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:21.665733  218555 cri.go:89] found id: ""
	I1210 23:04:21.665773  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.665783  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:21.665792  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:21.665853  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:21.703400  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:21.703424  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:21.703430  218555 cri.go:89] found id: ""
	I1210 23:04:21.703439  218555 logs.go:282] 2 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:21.703502  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.708298  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.712940  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:21.713037  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:21.757517  218555 cri.go:89] found id: ""
	I1210 23:04:21.757545  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.757556  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:21.757565  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:21.757620  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:21.802700  218555 cri.go:89] found id: ""
	I1210 23:04:21.802728  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.802739  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:21.802758  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:21.802772  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:21.824310  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:21.824346  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:21.910114  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:21.910136  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:21.910151  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:21.957023  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:21.957055  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:22.032034  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:22.032065  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:22.079037  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:22.079075  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:22.121144  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:22.121169  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:22.206276  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:22.206319  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:22.256540  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:22.256607  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 23:04:20.726175  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	I1210 23:04:22.227596  252278 node_ready.go:49] node "old-k8s-version-280530" is "Ready"
	I1210 23:04:22.227630  252278 node_ready.go:38] duration metric: took 12.506150778s for node "old-k8s-version-280530" to be "Ready" ...
	I1210 23:04:22.227668  252278 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:04:22.227794  252278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:04:22.246236  252278 api_server.go:72] duration metric: took 12.869795533s to wait for apiserver process to appear ...
	I1210 23:04:22.246414  252278 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:04:22.246443  252278 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:04:22.252360  252278 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 23:04:22.253663  252278 api_server.go:141] control plane version: v1.28.0
	I1210 23:04:22.253690  252278 api_server.go:131] duration metric: took 7.264266ms to wait for apiserver health ...
	I1210 23:04:22.253701  252278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:04:22.259514  252278 system_pods.go:59] 8 kube-system pods found
	I1210 23:04:22.259562  252278 system_pods.go:61] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.259570  252278 system_pods.go:61] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.259577  252278 system_pods.go:61] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.259583  252278 system_pods.go:61] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.259593  252278 system_pods.go:61] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.259603  252278 system_pods.go:61] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.259608  252278 system_pods.go:61] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.259615  252278 system_pods.go:61] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.259623  252278 system_pods.go:74] duration metric: took 5.914903ms to wait for pod list to return data ...
	I1210 23:04:22.259653  252278 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:04:22.262517  252278 default_sa.go:45] found service account: "default"
	I1210 23:04:22.262539  252278 default_sa.go:55] duration metric: took 2.87884ms for default service account to be created ...
	I1210 23:04:22.262566  252278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:04:22.266939  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:22.266973  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.266981  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.266989  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.266994  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.267000  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.267005  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.267010  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.267016  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.267039  252278 retry.go:31] will retry after 232.170594ms: missing components: kube-dns
	I1210 23:04:22.503158  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:22.503188  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.503193  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.503200  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.503203  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.503207  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.503210  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.503214  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.503219  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.503234  252278 retry.go:31] will retry after 310.786078ms: missing components: kube-dns
	I1210 23:04:22.818459  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:22.818489  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.818494  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.818500  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.818504  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.818508  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.818511  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.818520  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.818524  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.818543  252278 retry.go:31] will retry after 438.77602ms: missing components: kube-dns
	I1210 23:04:23.261929  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:23.261954  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Running
	I1210 23:04:23.261960  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:23.261963  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:23.261971  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:23.261975  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:23.261980  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:23.261985  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:23.261990  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Running
	I1210 23:04:23.262000  252278 system_pods.go:126] duration metric: took 999.423474ms to wait for k8s-apps to be running ...
	I1210 23:04:23.262015  252278 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:04:23.262065  252278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:04:23.275623  252278 system_svc.go:56] duration metric: took 13.599963ms WaitForService to wait for kubelet
	I1210 23:04:23.275661  252278 kubeadm.go:587] duration metric: took 13.899211788s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:04:23.275683  252278 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:04:23.278190  252278 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:04:23.278213  252278 node_conditions.go:123] node cpu capacity is 8
	I1210 23:04:23.278228  252278 node_conditions.go:105] duration metric: took 2.539808ms to run NodePressure ...
	I1210 23:04:23.278240  252278 start.go:242] waiting for startup goroutines ...
	I1210 23:04:23.278247  252278 start.go:247] waiting for cluster config update ...
	I1210 23:04:23.278257  252278 start.go:256] writing updated cluster config ...
	I1210 23:04:23.278512  252278 ssh_runner.go:195] Run: rm -f paused
	I1210 23:04:23.282247  252278 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:04:23.285903  252278 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.290252  252278 pod_ready.go:94] pod "coredns-5dd5756b68-6mzkn" is "Ready"
	I1210 23:04:23.290271  252278 pod_ready.go:86] duration metric: took 4.345986ms for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.292604  252278 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.296200  252278 pod_ready.go:94] pod "etcd-old-k8s-version-280530" is "Ready"
	I1210 23:04:23.296217  252278 pod_ready.go:86] duration metric: took 3.594598ms for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.298786  252278 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.302170  252278 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-280530" is "Ready"
	I1210 23:04:23.302191  252278 pod_ready.go:86] duration metric: took 3.382379ms for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.304575  252278 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.686161  252278 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-280530" is "Ready"
	I1210 23:04:23.686188  252278 pod_ready.go:86] duration metric: took 381.597868ms for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.887143  252278 pod_ready.go:83] waiting for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.287298  252278 pod_ready.go:94] pod "kube-proxy-nvgl4" is "Ready"
	I1210 23:04:24.287321  252278 pod_ready.go:86] duration metric: took 400.155224ms for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.487315  252278 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.886853  252278 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-280530" is "Ready"
	I1210 23:04:24.886878  252278 pod_ready.go:86] duration metric: took 399.53855ms for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.886893  252278 pod_ready.go:40] duration metric: took 1.604622158s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:04:24.939852  252278 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 23:04:24.941685  252278 out.go:203] 
	W1210 23:04:24.942850  252278 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 23:04:24.944090  252278 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 23:04:24.945963  252278 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-280530" cluster and "default" namespace by default
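
The 252278 lines above show the final startup gate: pod_ready.go polls each kube-system control-plane pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for the Ready condition before printing "Done!". A minimal stand-alone sketch of the same kind of gate, shelling out to kubectl much as the harness shells commands over SSH; the helper name and the 4-minute timeout here are illustrative, not minikube's own code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitControlPlaneReady mirrors the "extra waiting" step in the log above:
// every control-plane label must reach the Ready condition within the timeout.
// Illustrative sketch only; assumes kubectl is on PATH and the kubeconfig
// already points at the freshly started cluster.
func waitControlPlaneReady(timeout time.Duration) error {
	labels := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, label := range labels {
		cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
			"--for=condition=Ready", "pod", "-l", label,
			"--timeout="+timeout.String())
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("pods with label %q not ready: %v\n%s", label, err, out)
		}
	}
	return nil
}

func main() {
	if err := waitControlPlaneReady(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
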
	W1210 23:04:25.332176  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	W1210 23:04:27.832487  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	I1210 23:04:26.662713  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:26.663201  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:26.663259  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:26.663312  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:26.700771  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:26.700796  215904 cri.go:89] found id: ""
	I1210 23:04:26.700805  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:26.700851  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:26.705227  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:26.705304  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:26.739982  215904 cri.go:89] found id: ""
	I1210 23:04:26.740009  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.740022  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:26.740030  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:26.740096  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:26.772637  215904 cri.go:89] found id: ""
	I1210 23:04:26.772690  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.772700  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:26.772706  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:26.772754  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:26.801207  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:26.801226  215904 cri.go:89] found id: ""
	I1210 23:04:26.801233  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:26.801279  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:26.805308  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:26.805374  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:26.834182  215904 cri.go:89] found id: ""
	I1210 23:04:26.834202  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.834210  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:26.834215  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:26.834259  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:26.862361  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:26.862386  215904 cri.go:89] found id: ""
	I1210 23:04:26.862396  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:26.862454  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:26.867248  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:26.867323  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:26.895929  215904 cri.go:89] found id: ""
	I1210 23:04:26.895957  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.895966  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:26.895972  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:26.896024  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:26.923094  215904 cri.go:89] found id: ""
	I1210 23:04:26.923118  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.923127  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:26.923137  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:26.923150  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:26.970888  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:26.970921  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:27.001389  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:27.001426  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:27.092258  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:27.092289  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:27.107514  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:27.107539  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:27.164299  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:27.164320  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:27.164333  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:27.195053  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:27.195081  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:27.222683  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:27.222714  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
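
Each gathering pass in the 215904 stream above begins the same way: ask crictl for every container, running or exited, whose name matches a component, then tail the last 400 lines of each ID returned. A local sketch of that first lookup step, assuming crictl is installed and root access is available; the helper name is hypothetical, not minikube's cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same crictl query seen in the log above and returns
// one container ID per non-empty line of output. Hypothetical helper, sketch only.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %s: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		fmt.Println(c, ids, err)
	}
}
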
	I1210 23:04:24.853720  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:24.854133  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:24.854186  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:24.854248  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:24.890336  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:24.890365  218555 cri.go:89] found id: ""
	I1210 23:04:24.890375  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:24.890433  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:24.894437  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:24.894493  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:24.935833  218555 cri.go:89] found id: ""
	I1210 23:04:24.935860  218555 logs.go:282] 0 containers: []
	W1210 23:04:24.935871  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:24.935879  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:24.935934  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:24.978365  218555 cri.go:89] found id: ""
	I1210 23:04:24.978393  218555 logs.go:282] 0 containers: []
	W1210 23:04:24.978404  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:24.978412  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:24.978480  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:25.016297  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:25.016332  218555 cri.go:89] found id: ""
	I1210 23:04:25.016340  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:25.016396  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:25.020319  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:25.020391  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:25.056899  218555 cri.go:89] found id: ""
	I1210 23:04:25.056924  218555 logs.go:282] 0 containers: []
	W1210 23:04:25.056934  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:25.056942  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:25.057004  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:25.101908  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:25.101928  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:25.101938  218555 cri.go:89] found id: ""
	I1210 23:04:25.101946  218555 logs.go:282] 2 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:25.102006  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:25.105872  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:25.109469  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:25.109543  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:25.145153  218555 cri.go:89] found id: ""
	I1210 23:04:25.145182  218555 logs.go:282] 0 containers: []
	W1210 23:04:25.145191  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:25.145197  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:25.145259  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:25.188965  218555 cri.go:89] found id: ""
	I1210 23:04:25.188987  218555 logs.go:282] 0 containers: []
	W1210 23:04:25.188997  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:25.189016  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:25.189030  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:25.266753  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:25.266783  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:25.301586  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:25.301611  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:25.393253  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:25.393283  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:25.410575  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:25.410604  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:25.448312  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:25.448338  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:25.484181  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:25.484210  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:25.536412  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:25.536443  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:25.574897  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:25.574928  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:25.634417  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
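
The repeated "connection refused" errors and failed describe-nodes calls in the 218555 stream come from probing kube-apiserver while it is still down: the harness keeps requesting https://192.168.76.2:8443/healthz until it answers. A minimal sketch of such a probe, skipping certificate verification the way a bootstrap liveness check typically does; the URL and helper name are illustrative only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz performs the same kind of check as the api_server.go lines above:
// GET the apiserver's /healthz endpoint and report whether it answered with 200.
// Certificate verification is skipped because the probe only cares about liveness.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while kube-apiserver restarts
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// 192.168.76.2 is the node IP in the log above; adjust for your own cluster.
	if err := probeHealthz("https://192.168.76.2:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}
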
	I1210 23:04:28.134855  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:28.135298  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:28.135374  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:28.135437  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:28.170790  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:28.170813  218555 cri.go:89] found id: ""
	I1210 23:04:28.170823  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:28.170879  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:28.174912  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:28.174979  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:28.208763  218555 cri.go:89] found id: ""
	I1210 23:04:28.208784  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.208791  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:28.208796  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:28.208842  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:28.243376  218555 cri.go:89] found id: ""
	I1210 23:04:28.243400  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.243409  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:28.243417  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:28.243475  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:28.278280  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:28.278300  218555 cri.go:89] found id: ""
	I1210 23:04:28.278306  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:28.278357  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:28.282105  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:28.282161  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:28.316679  218555 cri.go:89] found id: ""
	I1210 23:04:28.316702  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.316710  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:28.316716  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:28.316772  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:28.352448  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:28.352468  218555 cri.go:89] found id: ""
	I1210 23:04:28.352477  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:28.352539  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:28.356325  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:28.356387  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:28.391264  218555 cri.go:89] found id: ""
	I1210 23:04:28.391288  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.391299  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:28.391307  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:28.391373  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:28.426690  218555 cri.go:89] found id: ""
	I1210 23:04:28.426718  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.426730  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:28.426742  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:28.426761  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:28.443934  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:28.443966  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:28.503704  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:28.503728  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:28.503744  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:28.542130  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:28.542161  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:28.621555  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:28.621586  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:28.656874  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:28.656901  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:28.708102  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:28.708131  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:28.746621  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:28.746657  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
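
With the apiserver unreachable, the harness above falls back to journald and the node itself: kubelet and CRI-O logs come from "journalctl -u <unit> -n 400", kernel messages from a filtered dmesg. A sketch of that journal-collection step with a hard deadline so a hung journalctl cannot stall the report; the 30-second timeout is an assumption, not taken from the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// collectUnitLogs pulls the last n lines of a systemd unit's journal, the same
// command the log-gathering loop above runs over SSH. The 30s deadline is an
// illustrative safeguard, not part of the original harness.
func collectUnitLogs(unit string, n int) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx, "sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		logs, err := collectUnitLogs(unit, 400)
		fmt.Printf("== %s == (%d bytes, err=%v)\n", unit, len(logs), err)
	}
}
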
	
	
	==> CRI-O <==
	Dec 10 23:04:22 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:22.224984344Z" level=info msg="Starting container: 965ec0a784421a1f3b8666286d14254943b9c6e521839d958e63eb5bd8f0c71d" id=ab64bfbb-185b-4247-aa7f-d232c273d4f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:04:22 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:22.227853979Z" level=info msg="Started container" PID=2182 containerID=965ec0a784421a1f3b8666286d14254943b9c6e521839d958e63eb5bd8f0c71d description=kube-system/coredns-5dd5756b68-6mzkn/coredns id=ab64bfbb-185b-4247-aa7f-d232c273d4f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1b9641bb650441084bb86039dbc262b17258c3bd344b1a82c6adedf06ea2398
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.472205938Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c62856ec-6dc9-4290-8b38-c2bc4f7918ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.472274283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.477112623Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:589b9d0a25fd5d14298a3b719c3f17434121132c3ba70e7937f19edcdbf0909b UID:eef6ab3b-83eb-4097-a924-8a1b73986571 NetNS:/var/run/netns/d4dc38ec-12a6-4780-a38d-882155c7a5f6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00057e7d0}] Aliases:map[]}"
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.477141167Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.487033743Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:589b9d0a25fd5d14298a3b719c3f17434121132c3ba70e7937f19edcdbf0909b UID:eef6ab3b-83eb-4097-a924-8a1b73986571 NetNS:/var/run/netns/d4dc38ec-12a6-4780-a38d-882155c7a5f6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00057e7d0}] Aliases:map[]}"
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.487205273Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.488041911Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.488902613Z" level=info msg="Ran pod sandbox 589b9d0a25fd5d14298a3b719c3f17434121132c3ba70e7937f19edcdbf0909b with infra container: default/busybox/POD" id=c62856ec-6dc9-4290-8b38-c2bc4f7918ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.490229185Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=154ac10e-768f-4f04-8bc4-4717dc7f0f7d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.490367292Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=154ac10e-768f-4f04-8bc4-4717dc7f0f7d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.490409262Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=154ac10e-768f-4f04-8bc4-4717dc7f0f7d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.490878485Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=41497f84-02b6-433d-ac64-2377a6dc923b name=/runtime.v1.ImageService/PullImage
	Dec 10 23:04:25 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:25.492342706Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.814839433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=41497f84-02b6-433d-ac64-2377a6dc923b name=/runtime.v1.ImageService/PullImage
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.815792342Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c882c3e9-acbc-434c-9739-b87cb218c422 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.81719466Z" level=info msg="Creating container: default/busybox/busybox" id=61808ec7-5e59-4ea3-88d8-f31ed82533b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.81731465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.82107222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.821457814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.863290048Z" level=info msg="Created container 6605d3bb7c96c1ab5a7ab7bb953c29b59393d70e70553ba7d372afbfdb130951: default/busybox/busybox" id=61808ec7-5e59-4ea3-88d8-f31ed82533b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.863965175Z" level=info msg="Starting container: 6605d3bb7c96c1ab5a7ab7bb953c29b59393d70e70553ba7d372afbfdb130951" id=2f1742ab-8c8e-4578-a859-aa3412de1f9d name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:04:26 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:26.86567708Z" level=info msg="Started container" PID=2258 containerID=6605d3bb7c96c1ab5a7ab7bb953c29b59393d70e70553ba7d372afbfdb130951 description=default/busybox/busybox id=2f1742ab-8c8e-4578-a859-aa3412de1f9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=589b9d0a25fd5d14298a3b719c3f17434121132c3ba70e7937f19edcdbf0909b
	Dec 10 23:04:32 old-k8s-version-280530 crio[775]: time="2025-12-10T23:04:32.248859726Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	6605d3bb7c96c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   589b9d0a25fd5       busybox                                          default
	965ec0a784421       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   f1b9641bb6504       coredns-5dd5756b68-6mzkn                         kube-system
	52abe755348a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   5db8106fcd2c0       storage-provisioner                              kube-system
	32ceec2962b7c       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   1ef2375309957       kindnet-4g5xn                                    kube-system
	096602c2c87dd       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      23 seconds ago      Running             kube-proxy                0                   808630998d262       kube-proxy-nvgl4                                 kube-system
	1333469816b3b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   3cf3980ee9eac       kube-apiserver-old-k8s-version-280530            kube-system
	931bd3acc84e7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   4b3042413ab91       kube-scheduler-old-k8s-version-280530            kube-system
	cdb142c5869a3       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   1ac7612d83ec1       kube-controller-manager-old-k8s-version-280530   kube-system
	57128f894092c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   73e42131e6547       etcd-old-k8s-version-280530                      kube-system
	
	
	==> coredns [965ec0a784421a1f3b8666286d14254943b9c6e521839d958e63eb5bd8f0c71d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60249 - 15885 "HINFO IN 4068857735407223863.4389193155695091204. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020289339s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-280530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-280530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=old-k8s-version-280530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_03_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-280530
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:04:27 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:04:27 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:04:27 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:04:27 +0000   Wed, 10 Dec 2025 23:04:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-280530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                467d6f4a-aed3-4ac0-a7b7-07929c2703cf
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-6mzkn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-280530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-4g5xn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-old-k8s-version-280530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-old-k8s-version-280530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-nvgl4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-old-k8s-version-280530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node old-k8s-version-280530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node old-k8s-version-280530 event: Registered Node old-k8s-version-280530 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-280530 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [57128f894092c44a2901645cf0443c93ba4c30eea8690e281a7158b4a73bd355] <==
	{"level":"info","ts":"2025-12-10T23:03:51.445182Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T23:03:51.619537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-10T23:03:51.61959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-10T23:03:51.619627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-10T23:03:51.619662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-10T23:03:51.619671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T23:03:51.619684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-10T23:03:51.619694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T23:03:51.62066Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:03:51.621205Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-280530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T23:03:51.621238Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T23:03:51.621232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T23:03:51.621412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T23:03:51.621434Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T23:03:51.622255Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:03:51.622387Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:03:51.622445Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:03:51.622636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-10T23:03:51.622932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-12-10T23:04:06.318535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.248085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-10T23:04:06.318708Z","caller":"traceutil/trace.go:171","msg":"trace[1662242166] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:274; }","duration":"206.444404ms","start":"2025-12-10T23:04:06.112247Z","end":"2025-12-10T23:04:06.318692Z","steps":["trace[1662242166] 'range keys from in-memory index tree'  (duration: 206.115841ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:06.318806Z","caller":"traceutil/trace.go:171","msg":"trace[1318359416] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"107.30192ms","start":"2025-12-10T23:04:06.211487Z","end":"2025-12-10T23:04:06.318789Z","steps":["trace[1318359416] 'process raft request'  (duration: 107.185ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:06.441022Z","caller":"traceutil/trace.go:171","msg":"trace[2125120033] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"118.668512ms","start":"2025-12-10T23:04:06.322333Z","end":"2025-12-10T23:04:06.441001Z","steps":["trace[2125120033] 'process raft request'  (duration: 96.98195ms)","trace[2125120033] 'compare'  (duration: 21.554054ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:04:06.724874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.060371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-12-10T23:04:06.724938Z","caller":"traceutil/trace.go:171","msg":"trace[1608391651] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler; range_end:; response_count:1; response_revision:278; }","duration":"163.140256ms","start":"2025-12-10T23:04:06.561783Z","end":"2025-12-10T23:04:06.724923Z","steps":["trace[1608391651] 'range keys from in-memory index tree'  (duration: 162.953221ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:04:33 up 46 min,  0 user,  load average: 2.74, 2.34, 1.60
	Linux old-k8s-version-280530 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [32ceec2962b7cdf2b83b2d453bb41f9db77a7ccc780259432ea726b2a518a75e] <==
	I1210 23:04:11.420190       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:04:11.420429       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 23:04:11.420553       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:04:11.420573       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:04:11.420585       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:04:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:04:11.717564       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:04:11.717592       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:04:11.717603       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:04:11.717784       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:04:12.018373       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:04:12.018395       1 metrics.go:72] Registering metrics
	I1210 23:04:12.018451       1 controller.go:711] "Syncing nftables rules"
	I1210 23:04:21.725749       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:04:21.725806       1 main.go:301] handling current node
	I1210 23:04:31.721499       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:04:31.721534       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1333469816b3b49befe0502b315a9e94056cd00047b29ee52bcf7e64ec42bcad] <==
	I1210 23:03:53.109411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 23:03:53.110104       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 23:03:53.111233       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:03:53.111946       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 23:03:53.112040       1 aggregator.go:166] initial CRD sync complete...
	I1210 23:03:53.112057       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 23:03:53.112065       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:03:53.112072       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:03:53.129928       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1210 23:03:53.303897       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:03:54.012029       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 23:03:54.015995       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 23:03:54.016011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:03:54.434840       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:03:54.474268       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:03:54.521752       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 23:03:54.527237       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1210 23:03:54.528363       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 23:03:54.532467       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:03:55.062498       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 23:03:55.911200       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 23:03:55.921124       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 23:03:55.930840       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1210 23:04:09.268008       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 23:04:09.426084       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [cdb142c5869a31e1af0cd0f1366d601fa37f5fd1ddece5aac5dd9f70e57ff9c1] <==
	I1210 23:04:08.718256       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 23:04:08.719514       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 23:04:08.815226       1 shared_informer.go:318] Caches are synced for attach detach
	I1210 23:04:09.138220       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 23:04:09.213034       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 23:04:09.213067       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 23:04:09.272048       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1210 23:04:09.446665       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4g5xn"
	I1210 23:04:09.448622       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nvgl4"
	I1210 23:04:09.626081       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rdltz"
	I1210 23:04:09.635278       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6mzkn"
	I1210 23:04:09.644659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="372.691022ms"
	I1210 23:04:09.656785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.049291ms"
	I1210 23:04:09.671628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.782262ms"
	I1210 23:04:09.671775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.252µs"
	I1210 23:04:09.750819       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1210 23:04:09.764055       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rdltz"
	I1210 23:04:09.774897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.168505ms"
	I1210 23:04:09.785955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.003427ms"
	I1210 23:04:09.786078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.363µs"
	I1210 23:04:21.872428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="285.228µs"
	I1210 23:04:21.890112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.582µs"
	I1210 23:04:23.085295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.655976ms"
	I1210 23:04:23.085500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.082µs"
	I1210 23:04:23.621852       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [096602c2c87dd64f8f9ebbb7f7e4914a83d48528909dbe5fbf74c753bf456179] <==
	I1210 23:04:09.869139       1 server_others.go:69] "Using iptables proxy"
	I1210 23:04:09.879974       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1210 23:04:09.899401       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:04:09.903537       1 server_others.go:152] "Using iptables Proxier"
	I1210 23:04:09.903584       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 23:04:09.903593       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 23:04:09.903633       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 23:04:09.903914       1 server.go:846] "Version info" version="v1.28.0"
	I1210 23:04:09.903927       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:04:09.904629       1 config.go:97] "Starting endpoint slice config controller"
	I1210 23:04:09.904673       1 config.go:315] "Starting node config controller"
	I1210 23:04:09.904688       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 23:04:09.904693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 23:04:09.904693       1 config.go:188] "Starting service config controller"
	I1210 23:04:09.904707       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 23:04:10.005804       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 23:04:10.005829       1 shared_informer.go:318] Caches are synced for service config
	I1210 23:04:10.005812       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [931bd3acc84e710fc2f7145eaf42756764f02ac845f04957640667c3a1cbf466] <==
	W1210 23:03:53.071904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 23:03:53.072041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1210 23:03:53.071954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 23:03:53.072060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1210 23:03:53.072074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1210 23:03:53.072093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 23:03:53.072094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1210 23:03:53.072101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 23:03:53.072106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 23:03:53.072121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1210 23:03:53.072127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 23:03:53.072146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1210 23:03:53.930182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 23:03:53.930217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1210 23:03:53.945806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 23:03:53.945837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1210 23:03:54.021756       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 23:03:54.021794       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:03:54.043062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 23:03:54.043093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1210 23:03:54.069812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 23:03:54.069845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1210 23:03:54.156228       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 23:03:54.156259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1210 23:03:56.069590       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 23:04:08 old-k8s-version-280530 kubelet[1416]: I1210 23:04:08.561348    1416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.452579    1416 topology_manager.go:215] "Topology Admit Handler" podUID="da5d63e5-1d59-4260-a616-bb1e532d73ef" podNamespace="kube-system" podName="kindnet-4g5xn"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.454627    1416 topology_manager.go:215] "Topology Admit Handler" podUID="d9f46688-73a7-4697-a4d4-b65d4e225487" podNamespace="kube-system" podName="kube-proxy-nvgl4"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.461577    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/da5d63e5-1d59-4260-a616-bb1e532d73ef-cni-cfg\") pod \"kindnet-4g5xn\" (UID: \"da5d63e5-1d59-4260-a616-bb1e532d73ef\") " pod="kube-system/kindnet-4g5xn"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.461653    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da5d63e5-1d59-4260-a616-bb1e532d73ef-xtables-lock\") pod \"kindnet-4g5xn\" (UID: \"da5d63e5-1d59-4260-a616-bb1e532d73ef\") " pod="kube-system/kindnet-4g5xn"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.461696    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w592b\" (UniqueName: \"kubernetes.io/projected/da5d63e5-1d59-4260-a616-bb1e532d73ef-kube-api-access-w592b\") pod \"kindnet-4g5xn\" (UID: \"da5d63e5-1d59-4260-a616-bb1e532d73ef\") " pod="kube-system/kindnet-4g5xn"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.461723    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da5d63e5-1d59-4260-a616-bb1e532d73ef-lib-modules\") pod \"kindnet-4g5xn\" (UID: \"da5d63e5-1d59-4260-a616-bb1e532d73ef\") " pod="kube-system/kindnet-4g5xn"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.563004    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4lv\" (UniqueName: \"kubernetes.io/projected/d9f46688-73a7-4697-a4d4-b65d4e225487-kube-api-access-xz4lv\") pod \"kube-proxy-nvgl4\" (UID: \"d9f46688-73a7-4697-a4d4-b65d4e225487\") " pod="kube-system/kube-proxy-nvgl4"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.563512    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9f46688-73a7-4697-a4d4-b65d4e225487-kube-proxy\") pod \"kube-proxy-nvgl4\" (UID: \"d9f46688-73a7-4697-a4d4-b65d4e225487\") " pod="kube-system/kube-proxy-nvgl4"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.563570    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9f46688-73a7-4697-a4d4-b65d4e225487-xtables-lock\") pod \"kube-proxy-nvgl4\" (UID: \"d9f46688-73a7-4697-a4d4-b65d4e225487\") " pod="kube-system/kube-proxy-nvgl4"
	Dec 10 23:04:09 old-k8s-version-280530 kubelet[1416]: I1210 23:04:09.563611    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f46688-73a7-4697-a4d4-b65d4e225487-lib-modules\") pod \"kube-proxy-nvgl4\" (UID: \"d9f46688-73a7-4697-a4d4-b65d4e225487\") " pod="kube-system/kube-proxy-nvgl4"
	Dec 10 23:04:12 old-k8s-version-280530 kubelet[1416]: I1210 23:04:12.138657    1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4g5xn" podStartSLOduration=1.709329168 podCreationTimestamp="2025-12-10 23:04:09 +0000 UTC" firstStartedPulling="2025-12-10 23:04:09.771113074 +0000 UTC m=+13.887503813" lastFinishedPulling="2025-12-10 23:04:11.200363363 +0000 UTC m=+15.316754107" observedRunningTime="2025-12-10 23:04:12.138331385 +0000 UTC m=+16.254722132" watchObservedRunningTime="2025-12-10 23:04:12.138579462 +0000 UTC m=+16.254970208"
	Dec 10 23:04:12 old-k8s-version-280530 kubelet[1416]: I1210 23:04:12.138866    1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nvgl4" podStartSLOduration=3.138830216 podCreationTimestamp="2025-12-10 23:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:04:10.035221209 +0000 UTC m=+14.151611958" watchObservedRunningTime="2025-12-10 23:04:12.138830216 +0000 UTC m=+16.255220964"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.845377    1416 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.872363    1416 topology_manager.go:215] "Topology Admit Handler" podUID="e58a1fae-28a7-4ee0-9b47-d218809cf39b" podNamespace="kube-system" podName="coredns-5dd5756b68-6mzkn"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.876275    1416 topology_manager.go:215] "Topology Admit Handler" podUID="32e8e488-81a6-4639-bc89-f5107ea52fdd" podNamespace="kube-system" podName="storage-provisioner"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.959302    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkswd\" (UniqueName: \"kubernetes.io/projected/32e8e488-81a6-4639-bc89-f5107ea52fdd-kube-api-access-gkswd\") pod \"storage-provisioner\" (UID: \"32e8e488-81a6-4639-bc89-f5107ea52fdd\") " pod="kube-system/storage-provisioner"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.959374    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/32e8e488-81a6-4639-bc89-f5107ea52fdd-tmp\") pod \"storage-provisioner\" (UID: \"32e8e488-81a6-4639-bc89-f5107ea52fdd\") " pod="kube-system/storage-provisioner"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.959412    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e58a1fae-28a7-4ee0-9b47-d218809cf39b-config-volume\") pod \"coredns-5dd5756b68-6mzkn\" (UID: \"e58a1fae-28a7-4ee0-9b47-d218809cf39b\") " pod="kube-system/coredns-5dd5756b68-6mzkn"
	Dec 10 23:04:21 old-k8s-version-280530 kubelet[1416]: I1210 23:04:21.959496    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxgg\" (UniqueName: \"kubernetes.io/projected/e58a1fae-28a7-4ee0-9b47-d218809cf39b-kube-api-access-bcxgg\") pod \"coredns-5dd5756b68-6mzkn\" (UID: \"e58a1fae-28a7-4ee0-9b47-d218809cf39b\") " pod="kube-system/coredns-5dd5756b68-6mzkn"
	Dec 10 23:04:23 old-k8s-version-280530 kubelet[1416]: I1210 23:04:23.077565    1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6mzkn" podStartSLOduration=14.077495858 podCreationTimestamp="2025-12-10 23:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:04:23.077454401 +0000 UTC m=+27.193845149" watchObservedRunningTime="2025-12-10 23:04:23.077495858 +0000 UTC m=+27.193886605"
	Dec 10 23:04:23 old-k8s-version-280530 kubelet[1416]: I1210 23:04:23.077713    1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.077681013 podCreationTimestamp="2025-12-10 23:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:04:23.066741482 +0000 UTC m=+27.183132240" watchObservedRunningTime="2025-12-10 23:04:23.077681013 +0000 UTC m=+27.194071760"
	Dec 10 23:04:25 old-k8s-version-280530 kubelet[1416]: I1210 23:04:25.170372    1416 topology_manager.go:215] "Topology Admit Handler" podUID="eef6ab3b-83eb-4097-a924-8a1b73986571" podNamespace="default" podName="busybox"
	Dec 10 23:04:25 old-k8s-version-280530 kubelet[1416]: I1210 23:04:25.178461    1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgr6\" (UniqueName: \"kubernetes.io/projected/eef6ab3b-83eb-4097-a924-8a1b73986571-kube-api-access-gvgr6\") pod \"busybox\" (UID: \"eef6ab3b-83eb-4097-a924-8a1b73986571\") " pod="default/busybox"
	Dec 10 23:04:27 old-k8s-version-280530 kubelet[1416]: I1210 23:04:27.080344    1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.75564507 podCreationTimestamp="2025-12-10 23:04:25 +0000 UTC" firstStartedPulling="2025-12-10 23:04:25.490568021 +0000 UTC m=+29.606958762" lastFinishedPulling="2025-12-10 23:04:26.815198868 +0000 UTC m=+30.931589610" observedRunningTime="2025-12-10 23:04:27.079744119 +0000 UTC m=+31.196134878" watchObservedRunningTime="2025-12-10 23:04:27.080275918 +0000 UTC m=+31.196666664"
	
	
	==> storage-provisioner [52abe755348a37addb88b539f13ad127b780ffbd68998b0395db8cfe96ea05f4] <==
	I1210 23:04:22.237029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:04:22.253825       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:04:22.253946       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 23:04:22.262521       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:04:22.262804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280530_7b8599c7-2323-46a6-8c68-6f0503e4b7f2!
	I1210 23:04:22.262768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54431db1-ea80-4659-b536-d1e109546d8c", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-280530_7b8599c7-2323-46a6-8c68-6f0503e4b7f2 became leader
	I1210 23:04:22.363138       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280530_7b8599c7-2323-46a6-8c68-6f0503e4b7f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280530 -n old-k8s-version-280530
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-280530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.616095ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:04:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
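The exit status 11 above comes from minikube's paused-state check: before enabling an addon it shells into the node and runs the `sudo runc list -f json` command quoted in the stderr block, and the `open /run/runc: no such file or directory` error makes that check fail, which surfaces as MK_ADDON_ENABLE_PAUSED. A minimal sketch of reproducing that check by hand, assuming the no-preload-092439 profile is still up (the runc invocation is the one quoted above, not a new flag):

	# Hedged reproduction sketch, assuming SSH access to the running profile.
	# Runs the same paused-state probe that `addons enable` reports failing on.
	minikube -p no-preload-092439 ssh -- sudo runc list -f json
	# If /run/runc is missing on the node (as in the stderr above), this exits
	# non-zero, which is what `addons enable` maps to exit status 11.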
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-092439 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-092439 describe deploy/metrics-server -n kube-system: exit status 1 (63.627106ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-092439 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
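The assertion on line 219 expects the metrics-server deployment's image to reflect the `--images`/`--registries` overrides passed to `addons enable`, i.e. to contain "fake.domain/registry.k8s.io/echoserver:1.4"; because the enable command exited early, the deployment was never created, hence the NotFound error above and the empty deployment info. As an illustrative sketch only (not the test's own check, and assuming kubectl access with the same context), the override could be inspected by hand once the deployment exists:

	# Hedged sketch: verify the registry/image override on the deployment, if it had been created.
	kubectl --context no-preload-092439 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected output would contain: fake.domain/registry.k8s.io/echoserver:1.4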
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-092439
helpers_test.go:244: (dbg) docker inspect no-preload-092439:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213",
	        "Created": "2025-12-10T23:03:49.807359238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:03:49.842043252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/hostname",
	        "HostsPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/hosts",
	        "LogPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213-json.log",
	        "Name": "/no-preload-092439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-092439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-092439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213",
	                "LowerDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-092439",
	                "Source": "/var/lib/docker/volumes/no-preload-092439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-092439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-092439",
	                "name.minikube.sigs.k8s.io": "no-preload-092439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "aa197e705bf2488d3f3662c85fd5037b6acbfca8eecbb70257f783ae059b956e",
	            "SandboxKey": "/var/run/docker/netns/aa197e705bf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-092439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9adf045f08f3157cc4b3a22d4d1229edfd6c1e8d22978b4ef7f6f7a0d83df92c",
	                    "EndpointID": "fa45ed71120e1c57061eb363df7dc8e5da9d12006d0438a4e6e8c30d77603159",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:d5:38:f7:5d:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-092439",
	                        "08ed46fd1dff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-092439 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-092439 logs -n 25: (1.066928883s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-177285 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo containerd config dump                                                                                                                                                                                                  │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ ssh     │ -p cilium-177285 sudo crio config                                                                                                                                                                                                             │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p cilium-177285                                                                                                                                                                                                                              │ cilium-177285             │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p force-systemd-flag-725815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-725815 │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ force-systemd-flag-725815 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-725815 │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ delete  │ -p force-systemd-flag-725815                                                                                                                                                                                                                  │ force-systemd-flag-725815 │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530    │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ stop    │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439         │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-280530    │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p old-k8s-version-280530 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-280530    │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-092439         │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:03:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:03:48.947755  257827 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:03:48.947874  257827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:03:48.947885  257827 out.go:374] Setting ErrFile to fd 2...
	I1210 23:03:48.947890  257827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:03:48.948124  257827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:03:48.948635  257827 out.go:368] Setting JSON to false
	I1210 23:03:48.949740  257827 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2771,"bootTime":1765405058,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:03:48.949803  257827 start.go:143] virtualization: kvm guest
	I1210 23:03:48.951953  257827 out.go:179] * [no-preload-092439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:03:48.953190  257827 notify.go:221] Checking for updates...
	I1210 23:03:48.953194  257827 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:03:48.954508  257827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:03:48.955846  257827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:03:48.957166  257827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:03:48.958377  257827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:03:48.959611  257827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:03:48.961304  257827 config.go:182] Loaded profile config "kubernetes-upgrade-000011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:03:48.961449  257827 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:03:48.961592  257827 config.go:182] Loaded profile config "stopped-upgrade-679204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:03:48.961700  257827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:03:48.984986  257827 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:03:48.985087  257827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:03:49.042813  257827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:03:49.033745424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:03:49.042911  257827 docker.go:319] overlay module found
	I1210 23:03:49.045119  257827 out.go:179] * Using the docker driver based on user configuration
	I1210 23:03:49.046303  257827 start.go:309] selected driver: docker
	I1210 23:03:49.046317  257827 start.go:927] validating driver "docker" against <nil>
	I1210 23:03:49.046331  257827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:03:49.046954  257827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:03:49.104215  257827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:03:49.094252924 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:03:49.104446  257827 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:03:49.104755  257827 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:03:49.106507  257827 out.go:179] * Using Docker driver with root privileges
	I1210 23:03:49.107612  257827 cni.go:84] Creating CNI manager for ""
	I1210 23:03:49.107712  257827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:03:49.107726  257827 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:03:49.107820  257827 start.go:353] cluster config:
	{Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:03:49.109119  257827 out.go:179] * Starting "no-preload-092439" primary control-plane node in "no-preload-092439" cluster
	I1210 23:03:49.110396  257827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:03:49.111539  257827 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:03:49.112594  257827 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:03:49.112702  257827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:03:49.112714  257827 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/config.json ...
	I1210 23:03:49.112744  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/config.json: {Name:mk382929cc2c549a45ba9315a93e1649c33fdf76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:03:49.112889  257827 cache.go:107] acquiring lock: {Name:mk28fded00b2eb43f464ddd8b45bc4e4ec08bb3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.112888  257827 cache.go:107] acquiring lock: {Name:mka56d5112841f21b3e7353ebb0e43779ce575dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.112927  257827 cache.go:107] acquiring lock: {Name:mk8a6aa013168b15dbefc5af313f4b71504c3f5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.112939  257827 cache.go:107] acquiring lock: {Name:mkdab71c46745e396cd56cf0c69b79eb6e9c81f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113009  257827 cache.go:115] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 23:03:49.113011  257827 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:49.113012  257827 cache.go:115] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 23:03:49.113019  257827 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.105µs
	I1210 23:03:49.113024  257827 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 95.835µs
	I1210 23:03:49.113034  257827 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 23:03:49.113034  257827 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 23:03:49.113022  257827 cache.go:107] acquiring lock: {Name:mkd2ed8297bc2ef6e52c45d6d09784d2954483e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113052  257827 cache.go:107] acquiring lock: {Name:mk4619f034a8ff7e5e9f09c156f5dc84cc50586a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113055  257827 cache.go:107] acquiring lock: {Name:mkfa5ba86b1b79d34dabf8df77d646828c1c0e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113044  257827 cache.go:107] acquiring lock: {Name:mkaebb267ce65474b38251f1ac7bb210058a59c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.113094  257827 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:49.113140  257827 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:49.113004  257827 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:49.113218  257827 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:49.113296  257827 cache.go:115] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 23:03:49.113307  257827 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 312.729µs
	I1210 23:03:49.113332  257827 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 23:03:49.114248  257827 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:49.114271  257827 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:49.114310  257827 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:49.114400  257827 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:49.114400  257827 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:49.135248  257827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:03:49.135269  257827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:03:49.135283  257827 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:03:49.135308  257827 start.go:360] acquireMachinesLock for no-preload-092439: {Name:mk2bc719b9b9863bdb78b604a641e66b37f2b26f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:03:49.135393  257827 start.go:364] duration metric: took 71.683µs to acquireMachinesLock for "no-preload-092439"
	I1210 23:03:49.135416  257827 start.go:93] Provisioning new machine with config: &{Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:03:49.135508  257827 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:03:45.817716  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:03:46.400713  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:46.401080  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:46.401136  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:46.401190  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:46.439436  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:46.439457  218555 cri.go:89] found id: ""
	I1210 23:03:46.439471  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:46.439524  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:46.443359  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:46.443410  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:46.479759  218555 cri.go:89] found id: ""
	I1210 23:03:46.479781  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.479792  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:46.479800  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:46.479854  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:46.519178  218555 cri.go:89] found id: ""
	I1210 23:03:46.519208  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.519219  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:46.519227  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:46.519282  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:46.556615  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:46.556655  218555 cri.go:89] found id: ""
	I1210 23:03:46.556666  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:46.556730  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:46.560620  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:46.560707  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:46.598456  218555 cri.go:89] found id: ""
	I1210 23:03:46.598479  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.598489  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:46.598496  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:46.598560  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:46.636028  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:46.636052  218555 cri.go:89] found id: ""
	I1210 23:03:46.636061  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:46.636120  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:46.639950  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:46.640017  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:46.677011  218555 cri.go:89] found id: ""
	I1210 23:03:46.677038  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.677049  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:46.677058  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:46.677116  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:46.713969  218555 cri.go:89] found id: ""
	I1210 23:03:46.713987  218555 logs.go:282] 0 containers: []
	W1210 23:03:46.713994  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:46.714002  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:46.714014  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:46.783136  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:46.783165  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:46.783188  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:46.825000  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:46.825033  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:46.909356  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:46.909381  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:46.951499  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:46.951577  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:47.015396  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:47.015433  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:47.062000  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:47.062029  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:47.160216  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:47.160244  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:49.679712  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:49.680175  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:49.680241  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:49.680306  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:49.717917  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:49.717940  218555 cri.go:89] found id: ""
	I1210 23:03:49.717950  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:49.718007  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:49.722105  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:49.722175  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:49.770493  218555 cri.go:89] found id: ""
	I1210 23:03:49.770519  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.770530  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:49.770537  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:49.770598  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:49.810810  218555 cri.go:89] found id: ""
	I1210 23:03:49.810835  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.810845  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:49.810852  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:49.810905  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:49.849431  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:49.849456  218555 cri.go:89] found id: ""
	I1210 23:03:49.849466  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:49.849524  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:47.451770  252278 out.go:252]   - Generating certificates and keys ...
	I1210 23:03:47.451848  252278 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:03:47.451927  252278 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:03:47.699264  252278 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:03:48.055765  252278 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:03:48.157671  252278 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:03:48.275034  252278 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:03:48.350884  252278 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:03:48.351036  252278 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-280530] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 23:03:48.516123  252278 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:03:48.516248  252278 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-280530] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 23:03:48.571794  252278 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:03:48.862494  252278 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:03:49.109571  252278 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:03:49.109730  252278 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:03:49.408901  252278 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:03:49.729196  252278 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:03:49.866839  252278 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:03:50.004929  252278 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:03:50.005745  252278 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:03:50.010861  252278 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:03:50.012328  252278 out.go:252]   - Booting up control plane ...
	I1210 23:03:50.012474  252278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:03:50.012603  252278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:03:50.013471  252278 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:03:50.030064  252278 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:03:50.031152  252278 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:03:50.031221  252278 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:03:50.141703  252278 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 23:03:49.138302  257827 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:03:49.138547  257827 start.go:159] libmachine.API.Create for "no-preload-092439" (driver="docker")
	I1210 23:03:49.138609  257827 client.go:173] LocalClient.Create starting
	I1210 23:03:49.138686  257827 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:03:49.138729  257827 main.go:143] libmachine: Decoding PEM data...
	I1210 23:03:49.138757  257827 main.go:143] libmachine: Parsing certificate...
	I1210 23:03:49.138816  257827 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:03:49.138843  257827 main.go:143] libmachine: Decoding PEM data...
	I1210 23:03:49.138861  257827 main.go:143] libmachine: Parsing certificate...
	I1210 23:03:49.139221  257827 cli_runner.go:164] Run: docker network inspect no-preload-092439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:03:49.157860  257827 cli_runner.go:211] docker network inspect no-preload-092439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:03:49.157927  257827 network_create.go:284] running [docker network inspect no-preload-092439] to gather additional debugging logs...
	I1210 23:03:49.157951  257827 cli_runner.go:164] Run: docker network inspect no-preload-092439
	W1210 23:03:49.177626  257827 cli_runner.go:211] docker network inspect no-preload-092439 returned with exit code 1
	I1210 23:03:49.177667  257827 network_create.go:287] error running [docker network inspect no-preload-092439]: docker network inspect no-preload-092439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-092439 not found
	I1210 23:03:49.177682  257827 network_create.go:289] output of [docker network inspect no-preload-092439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-092439 not found
	
	** /stderr **
	I1210 23:03:49.177761  257827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:03:49.197092  257827 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:03:49.197867  257827 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:03:49.198436  257827 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:03:49.199148  257827 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ba4ba5106fb6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:5d:c2:fb:6c:d4} reservation:<nil>}
	I1210 23:03:49.199550  257827 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a08a4bae7c44 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:f8:26:0e:4e:af} reservation:<nil>}
	I1210 23:03:49.200360  257827 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0ac00}
	I1210 23:03:49.200389  257827 network_create.go:124] attempt to create docker network no-preload-092439 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:03:49.200444  257827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-092439 no-preload-092439
	I1210 23:03:49.251252  257827 network_create.go:108] docker network no-preload-092439 192.168.94.0/24 created
	I1210 23:03:49.251287  257827 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-092439" container
	I1210 23:03:49.251352  257827 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:03:49.252788  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:49.260006  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 23:03:49.261150  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 23:03:49.263779  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:49.271918  257827 cli_runner.go:164] Run: docker volume create no-preload-092439 --label name.minikube.sigs.k8s.io=no-preload-092439 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:03:49.291289  257827 cache.go:162] opening:  /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 23:03:49.291288  257827 oci.go:103] Successfully created a docker volume no-preload-092439
	I1210 23:03:49.291390  257827 cli_runner.go:164] Run: docker run --rm --name no-preload-092439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-092439 --entrypoint /usr/bin/test -v no-preload-092439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:03:49.661171  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1210 23:03:49.661196  257827 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 548.321549ms
	I1210 23:03:49.661208  257827 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1210 23:03:49.729719  257827 oci.go:107] Successfully prepared a docker volume no-preload-092439
	I1210 23:03:49.729763  257827 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1210 23:03:49.729845  257827 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:03:49.729879  257827 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:03:49.729920  257827 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:03:49.788312  257827 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-092439 --name no-preload-092439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-092439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-092439 --network no-preload-092439 --ip 192.168.94.2 --volume no-preload-092439:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:03:50.105072  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Running}}
	I1210 23:03:50.127365  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:03:50.149395  257827 cli_runner.go:164] Run: docker exec no-preload-092439 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:03:50.215130  257827 oci.go:144] the created container "no-preload-092439" has a running status.
	I1210 23:03:50.215164  257827 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa...
	I1210 23:03:50.253901  257827 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:03:50.301388  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:03:50.335525  257827 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:03:50.335548  257827 kic_runner.go:114] Args: [docker exec --privileged no-preload-092439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:03:50.420583  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:03:50.453971  257827 machine.go:94] provisionDockerMachine start ...
	I1210 23:03:50.454071  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:50.487214  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:50.487570  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:50.487590  257827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:03:50.488367  257827 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48786->127.0.0.1:33064: read: connection reset by peer
	I1210 23:03:50.489381  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1210 23:03:50.489487  257827 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.376433556s
	I1210 23:03:50.489515  257827 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1210 23:03:50.581691  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1210 23:03:50.581724  257827 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.468672941s
	I1210 23:03:50.581741  257827 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1210 23:03:50.600525  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1210 23:03:50.600553  257827 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.487532203s
	I1210 23:03:50.600566  257827 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1210 23:03:50.601477  257827 cache.go:157] /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 23:03:50.601498  257827 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.488575462s
	I1210 23:03:50.601511  257827 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 23:03:50.601527  257827 cache.go:87] Successfully saved all images to host disk.
	I1210 23:03:53.626519  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-092439
	
	I1210 23:03:53.626552  257827 ubuntu.go:182] provisioning hostname "no-preload-092439"
	I1210 23:03:53.626633  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:53.646330  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:53.646636  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:53.646668  257827 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-092439 && echo "no-preload-092439" | sudo tee /etc/hostname
	I1210 23:03:53.791790  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-092439
	
	I1210 23:03:53.791871  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:53.810001  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:53.810284  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:53.810302  257827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-092439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-092439/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-092439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:03:53.944369  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:03:53.944411  257827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:03:53.944463  257827 ubuntu.go:190] setting up certificates
	I1210 23:03:53.944476  257827 provision.go:84] configureAuth start
	I1210 23:03:53.944555  257827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-092439
	I1210 23:03:50.819468  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 23:03:50.819524  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:50.819576  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:50.845805  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:03:50.845823  215904 cri.go:89] found id: "b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	I1210 23:03:50.845826  215904 cri.go:89] found id: ""
	I1210 23:03:50.845833  215904 logs.go:282] 2 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2]
	I1210 23:03:50.845878  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.849783  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.853373  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:50.853434  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:50.878001  215904 cri.go:89] found id: ""
	I1210 23:03:50.878026  215904 logs.go:282] 0 containers: []
	W1210 23:03:50.878036  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:03:50.878047  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:50.878096  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:50.904840  215904 cri.go:89] found id: ""
	I1210 23:03:50.904865  215904 logs.go:282] 0 containers: []
	W1210 23:03:50.904877  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:03:50.904884  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:50.904946  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:50.930819  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:03:50.930842  215904 cri.go:89] found id: ""
	I1210 23:03:50.930852  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:03:50.930914  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.934677  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:50.934740  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:50.959479  215904 cri.go:89] found id: ""
	I1210 23:03:50.959504  215904 logs.go:282] 0 containers: []
	W1210 23:03:50.959514  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:50.959522  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:50.959580  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:50.987741  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:03:50.987759  215904 cri.go:89] found id: "d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:03:50.987763  215904 cri.go:89] found id: ""
	I1210 23:03:50.987769  215904 logs.go:282] 2 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032]
	I1210 23:03:50.987816  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.991709  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:03:50.995253  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:50.995321  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:51.020889  215904 cri.go:89] found id: ""
	I1210 23:03:51.020912  215904 logs.go:282] 0 containers: []
	W1210 23:03:51.020923  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:51.020931  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:51.020989  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:51.046173  215904 cri.go:89] found id: ""
	I1210 23:03:51.046198  215904 logs.go:282] 0 containers: []
	W1210 23:03:51.046207  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:51.046225  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:51.046238  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:51.091807  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:51.091838  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:51.180080  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:51.180115  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:51.195098  215904 logs.go:123] Gathering logs for kube-apiserver [b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2] ...
	I1210 23:03:51.195130  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	I1210 23:03:51.225398  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:03:51.225429  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:03:51.252108  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:03:51.252139  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:51.282141  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:51.282176  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 23:03:49.854289  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:49.854369  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:49.891453  218555 cri.go:89] found id: ""
	I1210 23:03:49.891479  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.891489  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:49.891497  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:49.891555  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:49.931613  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:49.931639  218555 cri.go:89] found id: ""
	I1210 23:03:49.931665  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:49.931729  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:49.936618  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:49.936706  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:49.982709  218555 cri.go:89] found id: ""
	I1210 23:03:49.982734  218555 logs.go:282] 0 containers: []
	W1210 23:03:49.982744  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:49.982752  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:49.982817  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:50.023164  218555 cri.go:89] found id: ""
	I1210 23:03:50.023192  218555 logs.go:282] 0 containers: []
	W1210 23:03:50.023202  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:50.023213  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:50.023228  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:50.080763  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:50.080830  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:50.125804  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:50.125830  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:50.236468  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:50.236499  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:50.259638  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:50.260348  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:50.357931  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:50.358061  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:50.358085  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:50.425395  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:50.425427  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:50.536606  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:50.536639  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:53.085173  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:53.085633  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:53.085726  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:53.085793  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:53.126325  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:53.126350  218555 cri.go:89] found id: ""
	I1210 23:03:53.126369  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:53.126482  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:53.131634  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:53.131722  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:53.175441  218555 cri.go:89] found id: ""
	I1210 23:03:53.175467  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.175479  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:53.175486  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:53.175546  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:53.217100  218555 cri.go:89] found id: ""
	I1210 23:03:53.217128  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.217139  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:53.217148  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:53.217209  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:53.251001  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:53.251023  218555 cri.go:89] found id: ""
	I1210 23:03:53.251034  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:53.251097  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:53.254791  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:53.254856  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:53.290240  218555 cri.go:89] found id: ""
	I1210 23:03:53.290266  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.290274  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:53.290281  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:53.290337  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:53.325033  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:53.325051  218555 cri.go:89] found id: ""
	I1210 23:03:53.325059  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:53.325124  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:53.328852  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:53.328918  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:53.362349  218555 cri.go:89] found id: ""
	I1210 23:03:53.362375  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.362387  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:53.362395  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:53.362456  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:53.396071  218555 cri.go:89] found id: ""
	I1210 23:03:53.396098  218555 logs.go:282] 0 containers: []
	W1210 23:03:53.396109  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:53.396122  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:53.396140  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:53.412380  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:53.412414  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:53.470614  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:53.470637  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:53.470685  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:53.507634  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:53.507669  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:53.583598  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:53.583626  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:53.617850  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:53.617876  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:53.665128  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:53.665155  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:53.704910  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:53.704935  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
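
	(Annotation) The 218555 lines above repeat one pattern: for each control-plane component, list matching container IDs with "crictl ps -a --quiet --name=<component>", then tail the logs of every ID found. A minimal standalone Go sketch of that loop follows; it shells out to crictl directly rather than going through minikube's ssh_runner, so the sudo/crictl invocation style is an assumption for illustration, not the project's actual API.

	// log_gather_sketch.go - illustrative only, assumes crictl is on PATH.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines of each matching container, as in the log above.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}
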
	I1210 23:03:54.644473  252278 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502932 seconds
	I1210 23:03:54.644689  252278 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:03:54.658221  252278 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:03:55.181120  252278 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:03:55.181310  252278 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-280530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:03:55.692305  252278 kubeadm.go:319] [bootstrap-token] Using token: qidm8r.gttynu6ydc93qzk4
	I1210 23:03:53.963220  257827 provision.go:143] copyHostCerts
	I1210 23:03:53.963291  257827 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:03:53.963303  257827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:03:53.963371  257827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:03:53.963470  257827 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:03:53.963484  257827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:03:53.963515  257827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:03:53.963572  257827 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:03:53.963582  257827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:03:53.963604  257827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:03:53.963670  257827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.no-preload-092439 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-092439]
	I1210 23:03:54.062138  257827 provision.go:177] copyRemoteCerts
	I1210 23:03:54.062221  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:03:54.062275  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.083770  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.185672  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:03:54.207215  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:03:54.229589  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 23:03:54.251129  257827 provision.go:87] duration metric: took 306.636463ms to configureAuth
	I1210 23:03:54.251156  257827 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:03:54.251360  257827 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:03:54.251497  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.273444  257827 main.go:143] libmachine: Using SSH client type: native
	I1210 23:03:54.273732  257827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1210 23:03:54.273764  257827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:03:54.571798  257827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:03:54.571821  257827 machine.go:97] duration metric: took 4.117823734s to provisionDockerMachine
	I1210 23:03:54.571833  257827 client.go:176] duration metric: took 5.433213469s to LocalClient.Create
	I1210 23:03:54.571858  257827 start.go:167] duration metric: took 5.433311706s to libmachine.API.Create "no-preload-092439"
	I1210 23:03:54.571868  257827 start.go:293] postStartSetup for "no-preload-092439" (driver="docker")
	I1210 23:03:54.571888  257827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:03:54.571974  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:03:54.572024  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.591589  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.690878  257827 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:03:54.694424  257827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:03:54.694455  257827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:03:54.694469  257827 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:03:54.694526  257827 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:03:54.694607  257827 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:03:54.694725  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:03:54.702953  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:03:54.723820  257827 start.go:296] duration metric: took 151.91875ms for postStartSetup
	I1210 23:03:54.724196  257827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-092439
	I1210 23:03:54.742824  257827 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/config.json ...
	I1210 23:03:54.743116  257827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:03:54.743157  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.760997  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.855968  257827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:03:54.860665  257827 start.go:128] duration metric: took 5.725126155s to createHost
	I1210 23:03:54.860691  257827 start.go:83] releasing machines lock for "no-preload-092439", held for 5.72528499s
	I1210 23:03:54.860751  257827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-092439
	I1210 23:03:54.879053  257827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:03:54.879087  257827 ssh_runner.go:195] Run: cat /version.json
	I1210 23:03:54.879133  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.879137  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:03:54.897742  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:54.898734  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:03:55.047075  257827 ssh_runner.go:195] Run: systemctl --version
	I1210 23:03:55.054209  257827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:03:55.093435  257827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:03:55.098762  257827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:03:55.098836  257827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:03:55.126907  257827 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:03:55.126927  257827 start.go:496] detecting cgroup driver to use...
	I1210 23:03:55.126960  257827 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:03:55.127009  257827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:03:55.143011  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:03:55.155204  257827 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:03:55.155251  257827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:03:55.172053  257827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:03:55.191805  257827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:03:55.273885  257827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:03:55.361951  257827 docker.go:234] disabling docker service ...
	I1210 23:03:55.362016  257827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:03:55.380406  257827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:03:55.393346  257827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:03:55.479164  257827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:03:55.562681  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:03:55.575631  257827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:03:55.589605  257827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:03:55.589697  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.600244  257827 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:03:55.600293  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.608914  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.617395  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.626015  257827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:03:55.633959  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.642341  257827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.655427  257827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:03:55.663822  257827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:03:55.671325  257827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:03:55.678767  257827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:03:55.770099  257827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:03:55.919023  257827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:03:55.919094  257827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:03:55.923957  257827 start.go:564] Will wait 60s for crictl version
	I1210 23:03:55.924011  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:55.928256  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:03:55.956121  257827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:03:55.956217  257827 ssh_runner.go:195] Run: crio --version
	I1210 23:03:55.996761  257827 ssh_runner.go:195] Run: crio --version
	I1210 23:03:56.027275  257827 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
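
	(Annotation) Before the "Preparing Kubernetes" line above, the provisioner rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed commands: pinning the pause image, switching CRI-O to the systemd cgroup driver, keeping conmon in the pod cgroup, and finally restarting the service. The following is a minimal sketch of that sequence, assuming shell access to the node; it mirrors the sed expressions from the log but is not minikube's real implementation.

	// crio_config_sketch.go - reproduces the sed edits shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

	func main() {
		pauseImage := "registry.k8s.io/pause:3.10.1"
		cmds := []string{
			// Pin the pause image used for pod sandboxes.
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, crioConf),
			// Use the systemd cgroup driver and keep conmon in the pod cgroup.
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' %s`, crioConf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, crioConf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, crioConf),
		}
		for _, c := range cmds {
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				fmt.Printf("failed: %s\n%s\n", c, out)
				return
			}
		}
		// CRI-O must be restarted for the new config to take effect.
		_ = exec.Command("sudo", "systemctl", "restart", "crio").Run()
	}
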
	I1210 23:03:55.693945  252278 out.go:252]   - Configuring RBAC rules ...
	I1210 23:03:55.694092  252278 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:03:55.701582  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:03:55.714571  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:03:55.716335  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:03:55.719381  252278 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:03:55.722732  252278 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:03:55.733519  252278 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:03:55.922348  252278 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:03:56.106768  252278 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:03:56.107629  252278 kubeadm.go:319] 
	I1210 23:03:56.107717  252278 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:03:56.107727  252278 kubeadm.go:319] 
	I1210 23:03:56.107837  252278 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:03:56.107860  252278 kubeadm.go:319] 
	I1210 23:03:56.107904  252278 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:03:56.107967  252278 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:03:56.108016  252278 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:03:56.108022  252278 kubeadm.go:319] 
	I1210 23:03:56.108095  252278 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:03:56.108104  252278 kubeadm.go:319] 
	I1210 23:03:56.108168  252278 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:03:56.108175  252278 kubeadm.go:319] 
	I1210 23:03:56.108237  252278 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:03:56.108308  252278 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:03:56.108363  252278 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:03:56.108373  252278 kubeadm.go:319] 
	I1210 23:03:56.108437  252278 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:03:56.108499  252278 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:03:56.108505  252278 kubeadm.go:319] 
	I1210 23:03:56.108624  252278 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qidm8r.gttynu6ydc93qzk4 \
	I1210 23:03:56.108814  252278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:03:56.108838  252278 kubeadm.go:319] 	--control-plane 
	I1210 23:03:56.108841  252278 kubeadm.go:319] 
	I1210 23:03:56.108914  252278 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:03:56.108920  252278 kubeadm.go:319] 
	I1210 23:03:56.109017  252278 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qidm8r.gttynu6ydc93qzk4 \
	I1210 23:03:56.109153  252278 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:03:56.111527  252278 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:03:56.111703  252278 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:03:56.111734  252278 cni.go:84] Creating CNI manager for ""
	I1210 23:03:56.111744  252278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:03:56.114187  252278 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:03:56.028615  257827 cli_runner.go:164] Run: docker network inspect no-preload-092439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:03:56.045863  257827 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 23:03:56.050005  257827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:03:56.060478  257827 kubeadm.go:884] updating cluster {Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:03:56.060590  257827 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:03:56.060632  257827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:03:56.089941  257827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 23:03:56.089968  257827 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 23:03:56.090052  257827 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:56.090069  257827 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.090106  257827 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.090135  257827 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 23:03:56.090174  257827 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.090051  257827 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.090256  257827 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.090110  257827 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.091544  257827 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.091621  257827 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.091688  257827 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.091723  257827 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.091544  257827 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:56.091547  257827 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.091547  257827 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 23:03:56.091893  257827 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.216329  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.219113  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.219421  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.224438  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.233057  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.234505  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.252117  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 23:03:56.280815  257827 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 23:03:56.280864  257827 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.280909  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.280988  257827 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1210 23:03:56.281025  257827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.281109  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.281187  257827 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1210 23:03:56.281233  257827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.281349  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.292861  257827 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1210 23:03:56.292905  257827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.292957  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.296877  257827 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1210 23:03:56.296916  257827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.296962  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.296881  257827 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1210 23:03:56.297013  257827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.297065  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.301523  257827 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 23:03:56.301559  257827 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 23:03:56.301582  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.301598  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.301599  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.301684  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.301708  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.301732  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.301784  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.341885  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.341917  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.341993  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 23:03:56.347614  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.347682  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.347624  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.347727  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.386473  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 23:03:56.386485  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 23:03:56.386539  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 23:03:56.397912  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 23:03:56.398026  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 23:03:56.400527  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 23:03:56.406682  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 23:03:56.426978  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 23:03:56.431534  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 23:03:56.431710  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 23:03:56.433476  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:56.433599  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:56.442116  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 23:03:56.442181  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 23:03:56.442220  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 23:03:56.442262  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 23:03:56.449001  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:56.449101  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:56.457376  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 23:03:56.457456  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 23:03:56.465332  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 23:03:56.465433  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 23:03:56.465445  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 23:03:56.465476  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 23:03:56.465485  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465512  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1210 23:03:56.465535  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465565  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465572  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1210 23:03:56.465581  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1210 23:03:56.465597  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1210 23:03:56.465612  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1210 23:03:56.465665  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 23:03:56.465691  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1210 23:03:56.486595  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 23:03:56.486634  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 23:03:56.622393  257827 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 23:03:56.622474  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 23:03:57.108726  257827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:57.129069  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 23:03:57.129114  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:57.129169  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 23:03:57.155670  257827 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 23:03:57.155718  257827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:57.155767  257827 ssh_runner.go:195] Run: which crictl
	I1210 23:03:58.262424  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.133229723s)
	I1210 23:03:58.262453  257827 ssh_runner.go:235] Completed: which crictl: (1.106665082s)
	I1210 23:03:58.262509  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:58.262461  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1210 23:03:58.262567  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:58.262605  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 23:03:58.289461  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
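
	(Annotation) The 257827 lines above show the cached-image flow: stat the tarball under /var/lib/minikube/images, scp it from the host cache only when the stat fails, then load it into the runtime with "podman load". A simplified single-host Go sketch follows; it copies the file locally instead of over SSH, and both directory paths are placeholders taken from the log for illustration.

	// cache_image_sketch.go - single-host approximation of the scp + podman load step.
	package main

	import (
		"fmt"
		"io"
		"os"
		"os/exec"
		"path/filepath"
	)

	func ensureLoaded(cacheDir, imagesDir, name string) error {
		src := filepath.Join(cacheDir, name)
		dst := filepath.Join(imagesDir, name)

		// Existence check, analogous to `stat -c "%s %y" <path>` in the log.
		if _, err := os.Stat(dst); os.IsNotExist(err) {
			in, err := os.Open(src)
			if err != nil {
				return err
			}
			defer in.Close()
			out, err := os.Create(dst)
			if err != nil {
				return err
			}
			defer out.Close()
			if _, err := io.Copy(out, in); err != nil {
				return err
			}
		}

		// Load the tarball into the image store via podman, as in the log.
		cmd := exec.Command("sudo", "podman", "load", "-i", dst)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Paths are illustrative placeholders, not the exact cache layout.
		if err := ensureLoaded("/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io", "/var/lib/minikube/images", "pause_3.10.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
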
	I1210 23:03:56.302707  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:56.303151  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:03:56.303210  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:03:56.303270  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:03:56.354123  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:56.354161  218555 cri.go:89] found id: ""
	I1210 23:03:56.354171  218555 logs.go:282] 1 containers: [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:03:56.354230  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.361844  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:03:56.361934  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:03:56.429489  218555 cri.go:89] found id: ""
	I1210 23:03:56.429517  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.429534  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:03:56.429543  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:03:56.429605  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:03:56.485145  218555 cri.go:89] found id: ""
	I1210 23:03:56.485168  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.485188  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:03:56.485196  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:03:56.485248  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:03:56.527598  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:56.527624  218555 cri.go:89] found id: ""
	I1210 23:03:56.527656  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:03:56.527721  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.531877  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:03:56.531946  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:03:56.584604  218555 cri.go:89] found id: ""
	I1210 23:03:56.584633  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.584666  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:03:56.584682  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:03:56.584746  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:03:56.636847  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:56.636869  218555 cri.go:89] found id: ""
	I1210 23:03:56.636882  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:03:56.636945  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:03:56.641998  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:03:56.642078  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:03:56.690194  218555 cri.go:89] found id: ""
	I1210 23:03:56.690225  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.690235  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:03:56.690243  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:03:56.690308  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:03:56.745479  218555 cri.go:89] found id: ""
	I1210 23:03:56.745509  218555 logs.go:282] 0 containers: []
	W1210 23:03:56.745520  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:03:56.745533  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:03:56.745549  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:03:56.773917  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:03:56.773961  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:03:56.850291  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:03:56.850315  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:03:56.850339  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:03:56.896229  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:03:56.896261  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:03:56.970979  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:03:56.971016  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:03:57.012693  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:03:57.012720  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:03:57.075057  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:03:57.075102  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:03:57.129214  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:03:57.129240  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:03:59.769754  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:03:56.115403  252278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:03:56.119690  252278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1210 23:03:56.119706  252278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:03:56.132915  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:03:57.212383  252278 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.079424658s)
	I1210 23:03:57.212423  252278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:03:57.212555  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:57.212637  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-280530 minikube.k8s.io/updated_at=2025_12_10T23_03_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=old-k8s-version-280530 minikube.k8s.io/primary=true
	I1210 23:03:57.224806  252278 ops.go:34] apiserver oom_adj: -16
	I1210 23:03:57.303727  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:57.803773  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:58.304822  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:58.803866  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:59.303844  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:03:59.804341  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:00.304364  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
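
	(Annotation) The repeated "kubectl get sa default" runs above are a readiness poll: once kubeadm finishes, minikube re-runs the command roughly every 500ms until the default service account exists, which indicates the API server and controllers are serving. A minimal sketch of that loop follows, using the kubectl path from the log; the two-minute timeout is an assumption.

	// sa_poll_sketch.go - poll until the default service account exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
		deadline := time.Now().Add(2 * time.Minute)

		for time.Now().Before(deadline) {
			// The same probe command the log shows on each iteration.
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}
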
	I1210 23:03:59.403106  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.140472788s)
	I1210 23:03:59.403141  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1210 23:03:59.403144  257827 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.11365071s)
	I1210 23:03:59.403177  257827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 23:03:59.403223  257827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:03:59.403224  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 23:04:00.660482  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.25716445s)
	I1210 23:04:00.660510  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 23:04:00.660532  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 23:04:00.660538  257827 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.257279408s)
	I1210 23:04:00.660577  257827 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 23:04:00.660578  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 23:04:00.660672  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 23:04:00.664816  257827 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 23:04:00.664851  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 23:04:02.050542  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.389917281s)
	I1210 23:04:02.050569  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1210 23:04:02.050594  257827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 23:04:02.050673  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1210 23:04:03.264896  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.214198674s)
	I1210 23:04:03.264925  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 23:04:03.264962  257827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 23:04:03.265015  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 23:04:01.348582  215904 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066383086s)
	W1210 23:04:01.348624  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1210 23:04:01.348634  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:01.348665  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:01.387134  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:01.387172  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:01.417234  215904 logs.go:123] Gathering logs for kube-controller-manager [d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032] ...
	I1210 23:04:01.417259  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:03.947217  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:04.771523  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1210 23:04:04.771580  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:04.771638  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:04.814384  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:04.814409  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:04:04.814415  218555 cri.go:89] found id: ""
	I1210 23:04:04.814424  218555 logs.go:282] 2 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:04:04.814482  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.820086  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.825265  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:04.825341  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:00.803874  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:01.304333  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:01.804505  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:02.304446  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:02.804739  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:03.303757  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:03.803797  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:04.303828  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:04.804064  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:05.303793  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:04.349322  257827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.084282593s)
	I1210 23:04:04.349348  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1210 23:04:04.349377  257827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 23:04:04.349424  257827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 23:04:04.912910  257827 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22061-5100/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 23:04:04.912956  257827 cache_images.go:125] Successfully loaded all cached images
	I1210 23:04:04.912963  257827 cache_images.go:94] duration metric: took 8.822978565s to LoadCachedImages
	I1210 23:04:04.912978  257827 kubeadm.go:935] updating node { 192.168.94.2  8443 v1.35.0-beta.0 crio true true} ...
	I1210 23:04:04.913101  257827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-092439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:04:04.913188  257827 ssh_runner.go:195] Run: crio config
	I1210 23:04:04.969467  257827 cni.go:84] Creating CNI manager for ""
	I1210 23:04:04.969494  257827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:04:04.969516  257827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:04:04.969550  257827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-092439 NodeName:no-preload-092439 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:04:04.969712  257827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-092439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:04:04.969781  257827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 23:04:04.979515  257827 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1210 23:04:04.979581  257827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 23:04:04.989346  257827 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1210 23:04:04.989437  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1210 23:04:04.989488  257827 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1210 23:04:04.989507  257827 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1210 23:04:04.994506  257827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1210 23:04:04.994540  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1210 23:04:05.940488  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:04:05.954029  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1210 23:04:05.958090  257827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1210 23:04:05.958122  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1210 23:04:06.095516  257827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1210 23:04:06.101458  257827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1210 23:04:06.101500  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1210 23:04:06.309174  257827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:04:06.318411  257827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 23:04:06.332713  257827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 23:04:06.447145  257827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1210 23:04:06.460959  257827 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:04:06.464968  257827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:04:06.475858  257827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:04:06.562681  257827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:04:06.590530  257827 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439 for IP: 192.168.94.2
	I1210 23:04:06.590553  257827 certs.go:195] generating shared ca certs ...
	I1210 23:04:06.590573  257827 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.590751  257827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:04:06.590807  257827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:04:06.590821  257827 certs.go:257] generating profile certs ...
	I1210 23:04:06.590882  257827 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.key
	I1210 23:04:06.590910  257827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt with IP's: []
	I1210 23:04:06.679320  257827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt ...
	I1210 23:04:06.679356  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt: {Name:mk6e999ddf9fb4e249c890267ece03810e3898c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.679595  257827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.key ...
	I1210 23:04:06.679616  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.key: {Name:mk2e9c19d38df27e7f3571b8ba29f662af106455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.679772  257827 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d
	I1210 23:04:06.679797  257827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1210 23:04:06.892693  257827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d ...
	I1210 23:04:06.892719  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d: {Name:mk1979b8721bfea485b133d2aa14d24a9ab2e0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.892877  257827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d ...
	I1210 23:04:06.892892  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d: {Name:mkaf4c646957b7644aec53558fd1134dd056f4a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.892973  257827 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt.8d04d23d -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt
	I1210 23:04:06.893058  257827 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key.8d04d23d -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key
	I1210 23:04:06.893137  257827 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key
	I1210 23:04:06.893154  257827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt with IP's: []
	I1210 23:04:06.949878  257827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt ...
	I1210 23:04:06.949905  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt: {Name:mkc46ec8a783c13fcfec4a1a70fed06549840b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.950064  257827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key ...
	I1210 23:04:06.950077  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key: {Name:mk141f21e851fac890a6275a10731cc4766b17cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:06.950248  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:04:06.950289  257827 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:04:06.950299  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:04:06.950323  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:04:06.950364  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:04:06.950387  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:04:06.950424  257827 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:04:06.951027  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:04:06.970688  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:04:06.988617  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:04:07.006629  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:04:07.024964  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 23:04:07.042748  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:04:07.060192  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:04:07.077920  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 23:04:07.095394  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:04:07.117344  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:04:07.136548  257827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:04:07.155327  257827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:04:07.168880  257827 ssh_runner.go:195] Run: openssl version
	I1210 23:04:07.175126  257827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.182711  257827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:04:07.190453  257827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.194350  257827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.194399  257827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:04:07.229862  257827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:04:07.238436  257827 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:04:07.246102  257827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.253589  257827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:04:07.261136  257827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.265071  257827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.265127  257827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:04:07.299174  257827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:04:07.307493  257827 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:04:07.315601  257827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.323376  257827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:04:07.331271  257827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.335133  257827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.335215  257827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:04:07.373366  257827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:04:07.381239  257827 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:04:07.388791  257827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:04:07.392512  257827 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:04:07.392562  257827 kubeadm.go:401] StartCluster: {Name:no-preload-092439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-092439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:04:07.392638  257827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:04:07.392699  257827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:04:07.421340  257827 cri.go:89] found id: ""
	I1210 23:04:07.421407  257827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:04:07.430132  257827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:04:07.438656  257827 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:04:07.438718  257827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:04:07.447325  257827 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:04:07.447355  257827 kubeadm.go:158] found existing configuration files:
	
	I1210 23:04:07.447401  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:04:07.455875  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:04:07.455933  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:04:07.464244  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:04:07.472284  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:04:07.472344  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:04:07.479571  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:04:07.487086  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:04:07.487148  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:04:07.494438  257827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:04:07.502227  257827 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:04:07.502281  257827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:04:07.509860  257827 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:04:07.546842  257827 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 23:04:07.546934  257827 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:04:07.609756  257827 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:04:07.609889  257827 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:04:07.609957  257827 kubeadm.go:319] OS: Linux
	I1210 23:04:07.610000  257827 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:04:07.610074  257827 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:04:07.610139  257827 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:04:07.610213  257827 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:04:07.610303  257827 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:04:07.610383  257827 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:04:07.610433  257827 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:04:07.610481  257827 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:04:07.667693  257827 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:04:07.667845  257827 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:04:07.667982  257827 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:04:07.680343  257827 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 23:04:07.682304  257827 out.go:252]   - Generating certificates and keys ...
	I1210 23:04:07.682417  257827 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:04:07.682537  257827 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:04:07.763288  257827 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:04:07.801270  257827 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:04:07.834237  257827 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:04:07.929206  257827 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:04:08.043149  257827 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:04:08.043332  257827 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-092439] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:04:08.231301  257827 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:04:08.231511  257827 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-092439] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:04:08.370113  257827 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:04:08.455113  257827 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:04:08.539395  257827 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:04:08.539523  257827 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:04:08.647549  257827 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:04:08.744162  257827 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:04:08.892449  257827 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:04:09.052806  257827 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:04:09.068716  257827 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:04:09.069449  257827 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:04:09.073614  257827 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:04:04.736558  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:33534->192.168.103.2:8443: read: connection reset by peer
	I1210 23:04:04.736629  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:04.736715  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:04.766262  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:04.766283  215904 cri.go:89] found id: "b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	I1210 23:04:04.766292  215904 cri.go:89] found id: ""
	I1210 23:04:04.766300  215904 logs.go:282] 2 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2]
	I1210 23:04:04.766365  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.770528  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.774565  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:04.774629  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:04.807796  215904 cri.go:89] found id: ""
	I1210 23:04:04.807821  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.807832  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:04.807841  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:04.807897  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:04.846478  215904 cri.go:89] found id: ""
	I1210 23:04:04.846505  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.846521  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:04.846529  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:04.846595  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:04.881898  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:04.881922  215904 cri.go:89] found id: ""
	I1210 23:04:04.881932  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:04.881988  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.886959  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:04.887035  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:04.918619  215904 cri.go:89] found id: ""
	I1210 23:04:04.918639  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.918669  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:04.918677  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:04.918725  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:04.946483  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:04.946500  215904 cri.go:89] found id: "d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:04.946504  215904 cri.go:89] found id: ""
	I1210 23:04:04.946510  215904 logs.go:282] 2 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032]
	I1210 23:04:04.946554  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.951076  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.956120  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:04.956176  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:04.986385  215904 cri.go:89] found id: ""
	I1210 23:04:04.986409  215904 logs.go:282] 0 containers: []
	W1210 23:04:04.986427  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:04.986434  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:04.986494  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:05.018045  215904 cri.go:89] found id: ""
	I1210 23:04:05.018070  215904 logs.go:282] 0 containers: []
	W1210 23:04:05.018080  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:05.018098  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:05.018113  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:05.057611  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:05.057653  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:05.094938  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:05.094980  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:05.133407  215904 logs.go:123] Gathering logs for kube-controller-manager [d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032] ...
	I1210 23:04:05.133434  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:05.173626  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:05.173677  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:05.213162  215904 logs.go:123] Gathering logs for kube-apiserver [b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2] ...
	I1210 23:04:05.213193  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	W1210 23:04:05.243729  215904 logs.go:130] failed kube-apiserver [b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2": Process exited with status 1
	stdout:
	
	stderr:
	E1210 23:04:05.241175    6038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist" containerID="b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	time="2025-12-10T23:04:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1210 23:04:05.241175    6038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist" containerID="b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2"
	time="2025-12-10T23:04:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2\": container with ID starting with b09ec4cbfc4992c9bad4d7e90e9c69aa3a6dc78ffd398b3d55673848ed843df2 not found: ID does not exist"
	
	** /stderr **
	I1210 23:04:05.243749  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:05.243764  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:05.301919  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:05.301951  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:05.407800  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:05.407829  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:05.425479  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:05.425508  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:05.488947  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:07.990337  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:07.990742  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:07.990799  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:07.990846  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:08.020486  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:08.020506  215904 cri.go:89] found id: ""
	I1210 23:04:08.020514  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:08.020557  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.024490  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:08.024540  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:08.051556  215904 cri.go:89] found id: ""
	I1210 23:04:08.051579  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.051590  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:08.051599  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:08.051686  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:08.079097  215904 cri.go:89] found id: ""
	I1210 23:04:08.079126  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.079138  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:08.079147  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:08.079207  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:08.106624  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:08.106716  215904 cri.go:89] found id: ""
	I1210 23:04:08.106738  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:08.106793  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.110755  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:08.110819  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:08.137616  215904 cri.go:89] found id: ""
	I1210 23:04:08.137655  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.137668  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:08.137676  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:08.137739  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:08.165553  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:08.165578  215904 cri.go:89] found id: "d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:08.165584  215904 cri.go:89] found id: ""
	I1210 23:04:08.165593  215904 logs.go:282] 2 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032]
	I1210 23:04:08.165669  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.169908  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:08.173777  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:08.173843  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:08.208979  215904 cri.go:89] found id: ""
	I1210 23:04:08.209002  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.209013  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:08.209020  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:08.209074  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:08.237600  215904 cri.go:89] found id: ""
	I1210 23:04:08.237625  215904 logs.go:282] 0 containers: []
	W1210 23:04:08.237636  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:08.237682  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:08.237703  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:08.254453  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:08.254491  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:08.311903  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:08.311923  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:08.311938  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:08.345268  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:08.345295  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:08.375988  215904 logs.go:123] Gathering logs for kube-controller-manager [d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032] ...
	I1210 23:04:08.376018  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4e3fa031f4f50e01a55c2b81912f22cf704cf34a2ab9c2998f9a9c1a91b8032"
	I1210 23:04:08.404349  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:08.404377  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:08.434947  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:08.434973  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:08.517221  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:08.517267  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:08.552270  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:08.552299  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:05.804802  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:06.303770  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:06.804356  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:07.304719  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:07.804695  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:08.303769  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:08.804436  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:09.304364  252278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:09.374894  252278 kubeadm.go:1114] duration metric: took 12.162385999s to wait for elevateKubeSystemPrivileges
	I1210 23:04:09.374938  252278 kubeadm.go:403] duration metric: took 22.35389953s to StartCluster
	I1210 23:04:09.374962  252278 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:09.375036  252278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:04:09.376119  252278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:09.376380  252278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:04:09.376403  252278 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:04:09.376823  252278 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:04:09.376904  252278 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:04:09.376914  252278 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-280530"
	I1210 23:04:09.376932  252278 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-280530"
	I1210 23:04:09.376950  252278 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-280530"
	I1210 23:04:09.376964  252278 host.go:66] Checking if "old-k8s-version-280530" exists ...
	I1210 23:04:09.376965  252278 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-280530"
	I1210 23:04:09.377323  252278 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:04:09.377496  252278 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:04:09.378843  252278 out.go:179] * Verifying Kubernetes components...
	I1210 23:04:09.380112  252278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:04:09.404782  252278 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:04:04.871973  218555 cri.go:89] found id: ""
	I1210 23:04:04.872000  218555 logs.go:282] 0 containers: []
	W1210 23:04:04.872010  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:04.872015  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:04.872075  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:04.915030  218555 cri.go:89] found id: ""
	I1210 23:04:04.915058  218555 logs.go:282] 0 containers: []
	W1210 23:04:04.915069  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:04.915078  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:04.915137  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:04.955844  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:04.955868  218555 cri.go:89] found id: ""
	I1210 23:04:04.955877  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:04.955933  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:04.959845  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:04.959895  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:05.002577  218555 cri.go:89] found id: ""
	I1210 23:04:05.002607  218555 logs.go:282] 0 containers: []
	W1210 23:04:05.002617  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:05.002626  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:05.002698  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:05.046807  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:05.046829  218555 cri.go:89] found id: ""
	I1210 23:04:05.046839  218555 logs.go:282] 1 containers: [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:05.046900  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:05.051532  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:05.051595  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:05.104937  218555 cri.go:89] found id: ""
	I1210 23:04:05.104966  218555 logs.go:282] 0 containers: []
	W1210 23:04:05.104976  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:05.104984  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:05.105050  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:05.156907  218555 cri.go:89] found id: ""
	I1210 23:04:05.157151  218555 logs.go:282] 0 containers: []
	W1210 23:04:05.157174  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:05.157193  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:05.157228  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:05.275392  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:05.275428  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 23:04:09.405004  252278 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-280530"
	I1210 23:04:09.405047  252278 host.go:66] Checking if "old-k8s-version-280530" exists ...
	I1210 23:04:09.405501  252278 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:04:09.406343  252278 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:09.406365  252278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:04:09.406431  252278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280530
	I1210 23:04:09.436330  252278 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:09.436413  252278 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:04:09.436568  252278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280530
	I1210 23:04:09.442122  252278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/old-k8s-version-280530/id_rsa Username:docker}
	I1210 23:04:09.469470  252278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/old-k8s-version-280530/id_rsa Username:docker}
	I1210 23:04:09.489566  252278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:04:09.540605  252278 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:04:09.558840  252278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:09.582939  252278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:09.720245  252278 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 23:04:09.721449  252278 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-280530" to be "Ready" ...
	I1210 23:04:09.956687  252278 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:04:09.957810  252278 addons.go:530] duration metric: took 580.985826ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:04:10.225579  252278 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-280530" context rescaled to 1 replicas
	I1210 23:04:09.075098  257827 out.go:252]   - Booting up control plane ...
	I1210 23:04:09.075188  257827 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:04:09.075276  257827 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:04:09.076239  257827 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:04:09.090309  257827 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:04:09.090418  257827 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:04:09.097006  257827 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:04:09.097227  257827 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:04:09.097317  257827 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:04:09.201119  257827 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:04:09.201254  257827 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:04:10.203826  257827 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002380824s
	I1210 23:04:10.208576  257827 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:04:10.208724  257827 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1210 23:04:10.208851  257827 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:04:10.208957  257827 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:04:10.713840  257827 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.113715ms
	I1210 23:04:12.357039  257827 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.148380489s
	I1210 23:04:11.108288  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:11.108725  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:11.108786  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:11.108841  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:11.137853  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:11.137874  215904 cri.go:89] found id: ""
	I1210 23:04:11.137883  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:11.137942  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:11.142681  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:11.142757  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:11.170313  215904 cri.go:89] found id: ""
	I1210 23:04:11.170340  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.170352  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:11.170360  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:11.170417  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:11.198253  215904 cri.go:89] found id: ""
	I1210 23:04:11.198275  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.198285  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:11.198292  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:11.198359  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:11.228495  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:11.228519  215904 cri.go:89] found id: ""
	I1210 23:04:11.228528  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:11.228584  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:11.233253  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:11.233319  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:11.260462  215904 cri.go:89] found id: ""
	I1210 23:04:11.260485  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.260493  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:11.260499  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:11.260554  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:11.287583  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:11.287601  215904 cri.go:89] found id: ""
	I1210 23:04:11.287608  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:11.287672  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:11.291507  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:11.291565  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:11.317608  215904 cri.go:89] found id: ""
	I1210 23:04:11.317634  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.317658  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:11.317666  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:11.317727  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:11.344042  215904 cri.go:89] found id: ""
	I1210 23:04:11.344064  215904 logs.go:282] 0 containers: []
	W1210 23:04:11.344072  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:11.344082  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:11.344094  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:11.374057  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:11.374085  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:11.399166  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:11.399191  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:11.428446  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:11.428476  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:11.488778  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:11.488808  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:11.522188  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:11.522220  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:11.627739  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:11.627771  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:11.647722  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:11.647752  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:11.714232  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:14.210770  257827 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002132406s
	I1210 23:04:14.229198  257827 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:04:14.240219  257827 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:04:14.250123  257827 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:04:14.250376  257827 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-092439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:04:14.258231  257827 kubeadm.go:319] [bootstrap-token] Using token: c62cuz.u4c8h8kjomii0rr4
	W1210 23:04:11.724319  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	W1210 23:04:13.724674  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	I1210 23:04:14.259935  257827 out.go:252]   - Configuring RBAC rules ...
	I1210 23:04:14.260078  257827 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:04:14.263057  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:04:14.268157  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:04:14.270677  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:04:14.274177  257827 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:04:14.276735  257827 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:04:14.616365  257827 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:04:15.034562  257827 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:04:15.616168  257827 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:04:15.617273  257827 kubeadm.go:319] 
	I1210 23:04:15.617402  257827 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:04:15.617423  257827 kubeadm.go:319] 
	I1210 23:04:15.617529  257827 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:04:15.617540  257827 kubeadm.go:319] 
	I1210 23:04:15.617575  257827 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:04:15.617719  257827 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:04:15.617796  257827 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:04:15.617805  257827 kubeadm.go:319] 
	I1210 23:04:15.617905  257827 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:04:15.617926  257827 kubeadm.go:319] 
	I1210 23:04:15.617994  257827 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:04:15.618004  257827 kubeadm.go:319] 
	I1210 23:04:15.618087  257827 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:04:15.618185  257827 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:04:15.618305  257827 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:04:15.618318  257827 kubeadm.go:319] 
	I1210 23:04:15.618447  257827 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:04:15.618550  257827 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:04:15.618558  257827 kubeadm.go:319] 
	I1210 23:04:15.618678  257827 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c62cuz.u4c8h8kjomii0rr4 \
	I1210 23:04:15.618829  257827 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:04:15.618884  257827 kubeadm.go:319] 	--control-plane 
	I1210 23:04:15.618893  257827 kubeadm.go:319] 
	I1210 23:04:15.619011  257827 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:04:15.619030  257827 kubeadm.go:319] 
	I1210 23:04:15.619155  257827 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c62cuz.u4c8h8kjomii0rr4 \
	I1210 23:04:15.619330  257827 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:04:15.620915  257827 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:04:15.621015  257827 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:04:15.621044  257827 cni.go:84] Creating CNI manager for ""
	I1210 23:04:15.621055  257827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:04:15.622692  257827 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:04:15.624109  257827 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:04:15.628452  257827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 23:04:15.628471  257827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:04:15.643562  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:04:15.849052  257827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:04:15.849142  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:15.849158  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-092439 minikube.k8s.io/updated_at=2025_12_10T23_04_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=no-preload-092439 minikube.k8s.io/primary=true
	I1210 23:04:15.939485  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:15.939526  257827 ops.go:34] apiserver oom_adj: -16
	I1210 23:04:16.440475  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:16.939769  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:17.440424  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:17.940471  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:18.440380  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:18.939972  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:14.214907  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:14.215310  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:14.215355  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:14.215400  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:14.247354  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:14.247377  215904 cri.go:89] found id: ""
	I1210 23:04:14.247386  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:14.247460  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:14.252422  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:14.252493  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:14.282161  215904 cri.go:89] found id: ""
	I1210 23:04:14.282185  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.282197  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:14.282205  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:14.282258  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:14.309933  215904 cri.go:89] found id: ""
	I1210 23:04:14.309957  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.309975  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:14.309981  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:14.310036  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:14.339819  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:14.339845  215904 cri.go:89] found id: ""
	I1210 23:04:14.339855  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:14.339913  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:14.345157  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:14.345231  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:14.376119  215904 cri.go:89] found id: ""
	I1210 23:04:14.376140  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.376153  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:14.376159  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:14.376210  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:14.408446  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:14.408465  215904 cri.go:89] found id: ""
	I1210 23:04:14.408473  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:14.408524  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:14.413095  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:14.413169  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:14.441278  215904 cri.go:89] found id: ""
	I1210 23:04:14.441306  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.441317  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:14.441326  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:14.441393  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:14.469271  215904 cri.go:89] found id: ""
	I1210 23:04:14.469294  215904 logs.go:282] 0 containers: []
	W1210 23:04:14.469304  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:14.469316  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:14.469329  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:14.498656  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:14.498685  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:14.585001  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:14.585032  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:14.600268  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:14.600293  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:14.672208  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:14.672232  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:14.672252  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:14.704082  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:14.704121  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:14.732691  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:14.732724  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:14.761212  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:14.761239  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:17.315740  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:17.316149  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:17.316200  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:17.316250  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:17.343893  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:17.343917  215904 cri.go:89] found id: ""
	I1210 23:04:17.343926  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:17.343985  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:17.347771  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:17.347836  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:17.375349  215904 cri.go:89] found id: ""
	I1210 23:04:17.375373  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.375381  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:17.375389  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:17.375445  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:17.402671  215904 cri.go:89] found id: ""
	I1210 23:04:17.402694  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.402702  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:17.402708  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:17.402751  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:17.428187  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:17.428211  215904 cri.go:89] found id: ""
	I1210 23:04:17.428219  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:17.428265  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:17.432134  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:17.432195  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:17.460760  215904 cri.go:89] found id: ""
	I1210 23:04:17.460786  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.460797  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:17.460804  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:17.460880  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:17.490355  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:17.490384  215904 cri.go:89] found id: ""
	I1210 23:04:17.490392  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:17.490450  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:17.494661  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:17.494723  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:17.523365  215904 cri.go:89] found id: ""
	I1210 23:04:17.523392  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.523401  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:17.523406  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:17.523454  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:17.551475  215904 cri.go:89] found id: ""
	I1210 23:04:17.551502  215904 logs.go:282] 0 containers: []
	W1210 23:04:17.551517  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:17.551528  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:17.551542  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:17.577804  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:17.577829  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:17.625326  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:17.625356  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:17.656176  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:17.656209  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:17.744974  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:17.745012  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:17.760342  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:17.760367  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:17.816031  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:17.816058  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:17.816074  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:17.848696  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:17.848722  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:15.350708  218555 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.075259648s)
	W1210 23:04:15.350745  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1210 23:04:15.350755  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:15.350778  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:15.429494  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:15.429529  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:15.446885  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:15.446911  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:15.484414  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:04:15.484451  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:04:15.522898  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:15.522928  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:15.557158  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:15.557188  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:15.604917  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:15.604959  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:18.146306  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:18.146762  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:18.146824  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:18.146888  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:18.183129  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:18.183161  218555 cri.go:89] found id: "8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	I1210 23:04:18.183167  218555 cri.go:89] found id: ""
	I1210 23:04:18.183177  218555 logs.go:282] 2 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]
	I1210 23:04:18.183253  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.187317  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.190819  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:18.190880  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:18.224530  218555 cri.go:89] found id: ""
	I1210 23:04:18.224553  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.224564  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:18.224571  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:18.224627  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:18.263252  218555 cri.go:89] found id: ""
	I1210 23:04:18.263281  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.263293  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:18.263301  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:18.263370  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:18.298950  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:18.298973  218555 cri.go:89] found id: ""
	I1210 23:04:18.298983  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:18.299039  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.302729  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:18.302779  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:18.337305  218555 cri.go:89] found id: ""
	I1210 23:04:18.337330  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.337340  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:18.337347  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:18.337410  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:18.371265  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:18.371290  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:18.371297  218555 cri.go:89] found id: ""
	I1210 23:04:18.371307  218555 logs.go:282] 2 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:18.371361  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.375054  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:18.378512  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:18.378555  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:18.412265  218555 cri.go:89] found id: ""
	I1210 23:04:18.412286  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.412294  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:18.412300  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:18.412356  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:18.446822  218555 cri.go:89] found id: ""
	I1210 23:04:18.446844  218555 logs.go:282] 0 containers: []
	W1210 23:04:18.446852  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:18.446868  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:18.446883  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:18.500892  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:18.500926  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:18.601504  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:18.601543  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:18.635703  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:18.635745  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:18.674968  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:18.675001  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:18.691746  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:18.691771  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:18.751864  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:18.751885  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:18.751897  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:18.789927  218555 logs.go:123] Gathering logs for kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9] ...
	I1210 23:04:18.789956  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	W1210 23:04:18.824468  218555 logs.go:130] failed kube-apiserver [8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9": Process exited with status 1
	stdout:
	
	stderr:
	E1210 23:04:18.821978    6181 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist" containerID="8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	time="2025-12-10T23:04:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1210 23:04:18.821978    6181 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist" containerID="8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9"
	time="2025-12-10T23:04:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9\": container with ID starting with 8f245335f821e70949e4316cc2f74a1d1094061d174a26f67186ec8d1b15b6c9 not found: ID does not exist"
	
	** /stderr **
	I1210 23:04:18.824489  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:18.824501  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:18.899449  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:18.899486  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	W1210 23:04:15.724718  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	W1210 23:04:18.225055  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	I1210 23:04:19.439513  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:19.939749  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:20.439743  257827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:04:20.516682  257827 kubeadm.go:1114] duration metric: took 4.6676217s to wait for elevateKubeSystemPrivileges
	I1210 23:04:20.516746  257827 kubeadm.go:403] duration metric: took 13.124163827s to StartCluster
	I1210 23:04:20.516771  257827 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:20.516843  257827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:04:20.518173  257827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:04:20.518415  257827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:04:20.518446  257827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:04:20.518505  257827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:04:20.518586  257827 addons.go:70] Setting storage-provisioner=true in profile "no-preload-092439"
	I1210 23:04:20.518608  257827 addons.go:239] Setting addon storage-provisioner=true in "no-preload-092439"
	I1210 23:04:20.518613  257827 addons.go:70] Setting default-storageclass=true in profile "no-preload-092439"
	I1210 23:04:20.518637  257827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-092439"
	I1210 23:04:20.518653  257827 host.go:66] Checking if "no-preload-092439" exists ...
	I1210 23:04:20.518701  257827 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:04:20.518961  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:04:20.519183  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:04:20.520383  257827 out.go:179] * Verifying Kubernetes components...
	I1210 23:04:20.522842  257827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:04:20.544345  257827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:04:20.545686  257827 addons.go:239] Setting addon default-storageclass=true in "no-preload-092439"
	I1210 23:04:20.545727  257827 host.go:66] Checking if "no-preload-092439" exists ...
	I1210 23:04:20.546139  257827 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:04:20.546152  257827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:20.546171  257827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:04:20.546226  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:04:20.575111  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:04:20.577631  257827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:20.577662  257827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:04:20.577727  257827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:04:20.601974  257827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:04:20.613891  257827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:04:20.679131  257827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:04:20.697026  257827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:04:20.717697  257827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:04:20.827240  257827 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1210 23:04:20.829161  257827 node_ready.go:35] waiting up to 6m0s for node "no-preload-092439" to be "Ready" ...
	I1210 23:04:21.069799  257827 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:04:21.071186  257827 addons.go:530] duration metric: took 552.673088ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:04:21.333093  257827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-092439" context rescaled to 1 replicas
	W1210 23:04:22.832392  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	I1210 23:04:20.377720  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:20.378221  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:20.378280  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:20.378341  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:20.405573  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:20.405596  215904 cri.go:89] found id: ""
	I1210 23:04:20.405604  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:20.405677  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:20.409672  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:20.409728  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:20.435730  215904 cri.go:89] found id: ""
	I1210 23:04:20.435763  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.435775  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:20.435784  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:20.435840  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:20.471327  215904 cri.go:89] found id: ""
	I1210 23:04:20.471354  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.471365  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:20.471373  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:20.471431  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:20.503868  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:20.503894  215904 cri.go:89] found id: ""
	I1210 23:04:20.503904  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:20.503961  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:20.508945  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:20.509011  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:20.543224  215904 cri.go:89] found id: ""
	I1210 23:04:20.543252  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.543263  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:20.543270  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:20.543330  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:20.588065  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:20.588089  215904 cri.go:89] found id: ""
	I1210 23:04:20.588099  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:20.588153  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:20.593043  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:20.593108  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:20.627151  215904 cri.go:89] found id: ""
	I1210 23:04:20.627177  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.627192  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:20.627200  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:20.627258  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:20.661786  215904 cri.go:89] found id: ""
	I1210 23:04:20.661810  215904 logs.go:282] 0 containers: []
	W1210 23:04:20.661821  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:20.661832  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:20.661848  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:20.699119  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:20.699149  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:20.736189  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:20.736229  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:20.804403  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:20.804454  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:20.848630  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:20.848690  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:20.968517  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:20.968547  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:20.985573  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:20.985601  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:21.075156  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:21.075180  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:21.075194  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:23.612780  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:23.613189  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:23.613238  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:23.613289  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:23.639136  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:23.639154  215904 cri.go:89] found id: ""
	I1210 23:04:23.639161  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:23.639214  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:23.643284  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:23.643348  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:23.670005  215904 cri.go:89] found id: ""
	I1210 23:04:23.670029  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.670039  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:23.670047  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:23.670122  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:23.697062  215904 cri.go:89] found id: ""
	I1210 23:04:23.697082  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.697090  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:23.697095  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:23.697151  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:23.724278  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:23.724295  215904 cri.go:89] found id: ""
	I1210 23:04:23.724302  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:23.724346  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:23.728182  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:23.728260  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:23.754134  215904 cri.go:89] found id: ""
	I1210 23:04:23.754156  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.754166  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:23.754182  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:23.754240  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:23.780985  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:23.781006  215904 cri.go:89] found id: ""
	I1210 23:04:23.781013  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:23.781057  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:23.785047  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:23.785116  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:23.813633  215904 cri.go:89] found id: ""
	I1210 23:04:23.813671  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.813683  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:23.813692  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:23.813742  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:23.841172  215904 cri.go:89] found id: ""
	I1210 23:04:23.841198  215904 logs.go:282] 0 containers: []
	W1210 23:04:23.841206  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:23.841217  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:23.841228  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:23.926005  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:23.926053  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:23.941148  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:23.941176  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:23.997042  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:23.997069  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:23.997090  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:24.026943  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:24.026970  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:24.053337  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:24.053364  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:24.080338  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:24.080368  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:24.131017  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:24.131050  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
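The block above is one iteration of minikube's diagnostic pass: each time the healthz probe against https://192.168.103.2:8443/healthz is refused, it enumerates control-plane containers through crictl and tails the runtime, kubelet and kernel logs. The same pass can be reproduced by hand on the node with the commands the log already shows; the container ID below is a placeholder for whatever the first command returns:

	# List control-plane containers known to CRI-O (repeat per component name).
	sudo crictl ps -a --quiet --name=kube-apiserver

	# Tail the last 400 lines of a container found above.
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>

	# Runtime, kubelet and kernel logs, exactly as gathered by the test.
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400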
	I1210 23:04:21.434291  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:21.434794  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:21.434855  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:21.434915  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:21.483862  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:21.483889  218555 cri.go:89] found id: ""
	I1210 23:04:21.483899  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:21.483963  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.488277  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:21.488345  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:21.530619  218555 cri.go:89] found id: ""
	I1210 23:04:21.530654  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.530664  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:21.530672  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:21.530735  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:21.578503  218555 cri.go:89] found id: ""
	I1210 23:04:21.578532  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.578543  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:21.578551  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:21.578613  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:21.620112  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:21.620133  218555 cri.go:89] found id: ""
	I1210 23:04:21.620142  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:21.620193  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.624047  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:21.624124  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:21.665733  218555 cri.go:89] found id: ""
	I1210 23:04:21.665773  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.665783  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:21.665792  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:21.665853  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:21.703400  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:21.703424  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:21.703430  218555 cri.go:89] found id: ""
	I1210 23:04:21.703439  218555 logs.go:282] 2 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:21.703502  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.708298  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:21.712940  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:21.713037  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:21.757517  218555 cri.go:89] found id: ""
	I1210 23:04:21.757545  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.757556  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:21.757565  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:21.757620  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:21.802700  218555 cri.go:89] found id: ""
	I1210 23:04:21.802728  218555 logs.go:282] 0 containers: []
	W1210 23:04:21.802739  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:21.802758  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:21.802772  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:21.824310  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:21.824346  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:21.910114  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:21.910136  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:21.910151  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:21.957023  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:21.957055  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:22.032034  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:22.032065  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:22.079037  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:22.079075  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:22.121144  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:22.121169  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:22.206276  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:22.206319  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:22.256540  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:22.256607  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
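Both restarted profiles in this interleaved section (processes 215904 and 218555) are stuck in the same loop: api_server.go probes /healthz on the cluster's apiserver address, gets connection refused, and falls back to the log-gathering pass. An equivalent manual probe, assuming curl is available on the host and using the endpoint printed in the log (-k skips certificate verification, which the test client configures differently):

	# Probe the apiserver health endpoint the test is polling.
	curl -k https://192.168.76.2:8443/healthz
	# "connection refused" here matches the api_server.go:269 lines above;
	# a healthy apiserver returns 200 with the body "ok".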
	W1210 23:04:20.726175  252278 node_ready.go:57] node "old-k8s-version-280530" has "Ready":"False" status (will retry)
	I1210 23:04:22.227596  252278 node_ready.go:49] node "old-k8s-version-280530" is "Ready"
	I1210 23:04:22.227630  252278 node_ready.go:38] duration metric: took 12.506150778s for node "old-k8s-version-280530" to be "Ready" ...
	I1210 23:04:22.227668  252278 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:04:22.227794  252278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:04:22.246236  252278 api_server.go:72] duration metric: took 12.869795533s to wait for apiserver process to appear ...
	I1210 23:04:22.246414  252278 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:04:22.246443  252278 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:04:22.252360  252278 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 23:04:22.253663  252278 api_server.go:141] control plane version: v1.28.0
	I1210 23:04:22.253690  252278 api_server.go:131] duration metric: took 7.264266ms to wait for apiserver health ...
	I1210 23:04:22.253701  252278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:04:22.259514  252278 system_pods.go:59] 8 kube-system pods found
	I1210 23:04:22.259562  252278 system_pods.go:61] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.259570  252278 system_pods.go:61] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.259577  252278 system_pods.go:61] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.259583  252278 system_pods.go:61] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.259593  252278 system_pods.go:61] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.259603  252278 system_pods.go:61] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.259608  252278 system_pods.go:61] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.259615  252278 system_pods.go:61] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.259623  252278 system_pods.go:74] duration metric: took 5.914903ms to wait for pod list to return data ...
	I1210 23:04:22.259653  252278 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:04:22.262517  252278 default_sa.go:45] found service account: "default"
	I1210 23:04:22.262539  252278 default_sa.go:55] duration metric: took 2.87884ms for default service account to be created ...
	I1210 23:04:22.262566  252278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:04:22.266939  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:22.266973  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.266981  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.266989  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.266994  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.267000  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.267005  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.267010  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.267016  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.267039  252278 retry.go:31] will retry after 232.170594ms: missing components: kube-dns
	I1210 23:04:22.503158  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:22.503188  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.503193  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.503200  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.503203  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.503207  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.503210  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.503214  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.503219  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.503234  252278 retry.go:31] will retry after 310.786078ms: missing components: kube-dns
	I1210 23:04:22.818459  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:22.818489  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:22.818494  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:22.818500  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:22.818504  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:22.818508  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:22.818511  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:22.818520  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:22.818524  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:22.818543  252278 retry.go:31] will retry after 438.77602ms: missing components: kube-dns
	I1210 23:04:23.261929  252278 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:23.261954  252278 system_pods.go:89] "coredns-5dd5756b68-6mzkn" [e58a1fae-28a7-4ee0-9b47-d218809cf39b] Running
	I1210 23:04:23.261960  252278 system_pods.go:89] "etcd-old-k8s-version-280530" [d3756733-1e3c-4994-b21d-6621b60b9eba] Running
	I1210 23:04:23.261963  252278 system_pods.go:89] "kindnet-4g5xn" [da5d63e5-1d59-4260-a616-bb1e532d73ef] Running
	I1210 23:04:23.261971  252278 system_pods.go:89] "kube-apiserver-old-k8s-version-280530" [b24cc109-d464-409f-a051-4ec31045ebfd] Running
	I1210 23:04:23.261975  252278 system_pods.go:89] "kube-controller-manager-old-k8s-version-280530" [4c0737e2-947a-4588-9cc6-f1d203be3790] Running
	I1210 23:04:23.261980  252278 system_pods.go:89] "kube-proxy-nvgl4" [d9f46688-73a7-4697-a4d4-b65d4e225487] Running
	I1210 23:04:23.261985  252278 system_pods.go:89] "kube-scheduler-old-k8s-version-280530" [e6ed5104-70f3-455e-b760-f0c987ef88e5] Running
	I1210 23:04:23.261990  252278 system_pods.go:89] "storage-provisioner" [32e8e488-81a6-4639-bc89-f5107ea52fdd] Running
	I1210 23:04:23.262000  252278 system_pods.go:126] duration metric: took 999.423474ms to wait for k8s-apps to be running ...
	I1210 23:04:23.262015  252278 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:04:23.262065  252278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:04:23.275623  252278 system_svc.go:56] duration metric: took 13.599963ms WaitForService to wait for kubelet
	I1210 23:04:23.275661  252278 kubeadm.go:587] duration metric: took 13.899211788s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:04:23.275683  252278 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:04:23.278190  252278 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:04:23.278213  252278 node_conditions.go:123] node cpu capacity is 8
	I1210 23:04:23.278228  252278 node_conditions.go:105] duration metric: took 2.539808ms to run NodePressure ...
	I1210 23:04:23.278240  252278 start.go:242] waiting for startup goroutines ...
	I1210 23:04:23.278247  252278 start.go:247] waiting for cluster config update ...
	I1210 23:04:23.278257  252278 start.go:256] writing updated cluster config ...
	I1210 23:04:23.278512  252278 ssh_runner.go:195] Run: rm -f paused
	I1210 23:04:23.282247  252278 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:04:23.285903  252278 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.290252  252278 pod_ready.go:94] pod "coredns-5dd5756b68-6mzkn" is "Ready"
	I1210 23:04:23.290271  252278 pod_ready.go:86] duration metric: took 4.345986ms for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.292604  252278 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.296200  252278 pod_ready.go:94] pod "etcd-old-k8s-version-280530" is "Ready"
	I1210 23:04:23.296217  252278 pod_ready.go:86] duration metric: took 3.594598ms for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.298786  252278 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.302170  252278 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-280530" is "Ready"
	I1210 23:04:23.302191  252278 pod_ready.go:86] duration metric: took 3.382379ms for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.304575  252278 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.686161  252278 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-280530" is "Ready"
	I1210 23:04:23.686188  252278 pod_ready.go:86] duration metric: took 381.597868ms for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:23.887143  252278 pod_ready.go:83] waiting for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.287298  252278 pod_ready.go:94] pod "kube-proxy-nvgl4" is "Ready"
	I1210 23:04:24.287321  252278 pod_ready.go:86] duration metric: took 400.155224ms for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.487315  252278 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.886853  252278 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-280530" is "Ready"
	I1210 23:04:24.886878  252278 pod_ready.go:86] duration metric: took 399.53855ms for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:24.886893  252278 pod_ready.go:40] duration metric: took 1.604622158s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:04:24.939852  252278 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 23:04:24.941685  252278 out.go:203] 
	W1210 23:04:24.942850  252278 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 23:04:24.944090  252278 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 23:04:24.945963  252278 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-280530" cluster and "default" namespace by default
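The warning a few lines above flags a client/server skew of six minor versions (host kubectl 1.34.3 against the 1.28.0 control plane), and the log itself suggests the workaround of running the bundled kubectl through minikube. A sketch for this profile, using the test binary path from this report and the subcommand taken verbatim from the hint above:

	# Use the kubectl that matches the cluster version instead of the host's.
	out/minikube-linux-amd64 -p old-k8s-version-280530 kubectl -- get pods -A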
	W1210 23:04:25.332176  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	W1210 23:04:27.832487  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
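While the old-k8s-version profile finished above, no-preload-092439 (process 257827) is still polling its node's Ready condition and logging "will retry" every few seconds. The condition it waits on can be read directly with the binaries already on that node, as the log's own paths show; the jsonpath expression is an assumption added here for illustration:

	# Print the Ready condition status for the node the test is waiting on;
	# node_ready.go keeps retrying until this prints "True".
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node no-preload-092439 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'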
	I1210 23:04:26.662713  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:26.663201  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:26.663259  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:26.663312  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:26.700771  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:26.700796  215904 cri.go:89] found id: ""
	I1210 23:04:26.700805  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:26.700851  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:26.705227  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:26.705304  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:26.739982  215904 cri.go:89] found id: ""
	I1210 23:04:26.740009  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.740022  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:26.740030  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:26.740096  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:26.772637  215904 cri.go:89] found id: ""
	I1210 23:04:26.772690  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.772700  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:26.772706  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:26.772754  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:26.801207  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:26.801226  215904 cri.go:89] found id: ""
	I1210 23:04:26.801233  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:26.801279  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:26.805308  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:26.805374  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:26.834182  215904 cri.go:89] found id: ""
	I1210 23:04:26.834202  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.834210  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:26.834215  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:26.834259  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:26.862361  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:26.862386  215904 cri.go:89] found id: ""
	I1210 23:04:26.862396  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:26.862454  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:26.867248  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:26.867323  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:26.895929  215904 cri.go:89] found id: ""
	I1210 23:04:26.895957  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.895966  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:26.895972  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:26.896024  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:26.923094  215904 cri.go:89] found id: ""
	I1210 23:04:26.923118  215904 logs.go:282] 0 containers: []
	W1210 23:04:26.923127  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:26.923137  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:26.923150  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:26.970888  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:26.970921  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:27.001389  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:27.001426  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:27.092258  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:27.092289  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:27.107514  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:27.107539  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:27.164299  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:27.164320  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:27.164333  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:27.195053  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:27.195081  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:27.222683  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:27.222714  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:24.853720  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:24.854133  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:24.854186  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:24.854248  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:24.890336  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:24.890365  218555 cri.go:89] found id: ""
	I1210 23:04:24.890375  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:24.890433  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:24.894437  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:24.894493  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:24.935833  218555 cri.go:89] found id: ""
	I1210 23:04:24.935860  218555 logs.go:282] 0 containers: []
	W1210 23:04:24.935871  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:24.935879  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:24.935934  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:24.978365  218555 cri.go:89] found id: ""
	I1210 23:04:24.978393  218555 logs.go:282] 0 containers: []
	W1210 23:04:24.978404  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:24.978412  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:24.978480  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:25.016297  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:25.016332  218555 cri.go:89] found id: ""
	I1210 23:04:25.016340  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:25.016396  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:25.020319  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:25.020391  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:25.056899  218555 cri.go:89] found id: ""
	I1210 23:04:25.056924  218555 logs.go:282] 0 containers: []
	W1210 23:04:25.056934  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:25.056942  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:25.057004  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:25.101908  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:25.101928  218555 cri.go:89] found id: "4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:25.101938  218555 cri.go:89] found id: ""
	I1210 23:04:25.101946  218555 logs.go:282] 2 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3]
	I1210 23:04:25.102006  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:25.105872  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:25.109469  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:25.109543  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:25.145153  218555 cri.go:89] found id: ""
	I1210 23:04:25.145182  218555 logs.go:282] 0 containers: []
	W1210 23:04:25.145191  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:25.145197  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:25.145259  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:25.188965  218555 cri.go:89] found id: ""
	I1210 23:04:25.188987  218555 logs.go:282] 0 containers: []
	W1210 23:04:25.188997  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:25.189016  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:25.189030  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:25.266753  218555 logs.go:123] Gathering logs for kube-controller-manager [4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3] ...
	I1210 23:04:25.266783  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4aa4fbb008209d8984992355cb315d8af929fcac93046624af4b6448429413f3"
	I1210 23:04:25.301586  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:25.301611  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:25.393253  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:25.393283  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:25.410575  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:25.410604  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:25.448312  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:25.448338  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:25.484181  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:25.484210  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:25.536412  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:25.536443  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:25.574897  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:25.574928  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:25.634417  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:28.134855  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:28.135298  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:28.135374  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:28.135437  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:28.170790  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:28.170813  218555 cri.go:89] found id: ""
	I1210 23:04:28.170823  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:28.170879  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:28.174912  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:28.174979  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:28.208763  218555 cri.go:89] found id: ""
	I1210 23:04:28.208784  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.208791  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:28.208796  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:28.208842  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:28.243376  218555 cri.go:89] found id: ""
	I1210 23:04:28.243400  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.243409  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:28.243417  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:28.243475  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:28.278280  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:28.278300  218555 cri.go:89] found id: ""
	I1210 23:04:28.278306  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:28.278357  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:28.282105  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:28.282161  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:28.316679  218555 cri.go:89] found id: ""
	I1210 23:04:28.316702  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.316710  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:28.316716  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:28.316772  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:28.352448  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:28.352468  218555 cri.go:89] found id: ""
	I1210 23:04:28.352477  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:28.352539  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:28.356325  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:28.356387  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:28.391264  218555 cri.go:89] found id: ""
	I1210 23:04:28.391288  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.391299  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:28.391307  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:28.391373  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:28.426690  218555 cri.go:89] found id: ""
	I1210 23:04:28.426718  218555 logs.go:282] 0 containers: []
	W1210 23:04:28.426730  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:28.426742  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:28.426761  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:28.443934  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:28.443966  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:28.503704  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:28.503728  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:28.503744  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:28.542130  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:28.542161  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:28.621555  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:28.621586  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:28.656874  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:28.656901  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:28.708102  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:28.708131  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:28.746621  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:28.746657  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 23:04:29.832846  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	W1210 23:04:32.332587  257827 node_ready.go:57] node "no-preload-092439" has "Ready":"False" status (will retry)
	I1210 23:04:33.332117  257827 node_ready.go:49] node "no-preload-092439" is "Ready"
	I1210 23:04:33.332147  257827 node_ready.go:38] duration metric: took 12.502956055s for node "no-preload-092439" to be "Ready" ...
	I1210 23:04:33.332162  257827 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:04:33.332212  257827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:04:33.348533  257827 api_server.go:72] duration metric: took 12.830050627s to wait for apiserver process to appear ...
	I1210 23:04:33.348561  257827 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:04:33.348582  257827 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 23:04:33.354271  257827 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1210 23:04:33.355829  257827 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 23:04:33.355856  257827 api_server.go:131] duration metric: took 7.28796ms to wait for apiserver health ...
	I1210 23:04:33.355866  257827 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:04:33.361292  257827 system_pods.go:59] 8 kube-system pods found
	I1210 23:04:33.361330  257827 system_pods.go:61] "coredns-7d764666f9-5tpb8" [fbc2ce49-615f-42cc-bd9d-806000e42928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:33.361339  257827 system_pods.go:61] "etcd-no-preload-092439" [3bf608f2-c85f-45d8-ac6d-2f6a28dcab23] Running
	I1210 23:04:33.361347  257827 system_pods.go:61] "kindnet-k4tzd" [54a85499-1f1a-461c-ad48-93a3f600bd39] Running
	I1210 23:04:33.361354  257827 system_pods.go:61] "kube-apiserver-no-preload-092439" [9cd90560-e18c-48f9-b039-b4fcf78cd20a] Running
	I1210 23:04:33.361368  257827 system_pods.go:61] "kube-controller-manager-no-preload-092439" [f1159f3f-4fb4-4538-92e3-972a304606e6] Running
	I1210 23:04:33.361373  257827 system_pods.go:61] "kube-proxy-gqz42" [41e804a2-5521-4737-9e8e-2634e81b3bca] Running
	I1210 23:04:33.361378  257827 system_pods.go:61] "kube-scheduler-no-preload-092439" [b5905ebb-f136-4ee4-a5bb-43a5a032ead6] Running
	I1210 23:04:33.361386  257827 system_pods.go:61] "storage-provisioner" [96a4d309-cf31-43d0-8c93-6c924a7f1647] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:33.361400  257827 system_pods.go:74] duration metric: took 5.525279ms to wait for pod list to return data ...
	I1210 23:04:33.361410  257827 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:04:33.364698  257827 default_sa.go:45] found service account: "default"
	I1210 23:04:33.364772  257827 default_sa.go:55] duration metric: took 3.354517ms for default service account to be created ...
	I1210 23:04:33.364812  257827 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:04:33.461387  257827 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:33.461425  257827 system_pods.go:89] "coredns-7d764666f9-5tpb8" [fbc2ce49-615f-42cc-bd9d-806000e42928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:33.461433  257827 system_pods.go:89] "etcd-no-preload-092439" [3bf608f2-c85f-45d8-ac6d-2f6a28dcab23] Running
	I1210 23:04:33.461442  257827 system_pods.go:89] "kindnet-k4tzd" [54a85499-1f1a-461c-ad48-93a3f600bd39] Running
	I1210 23:04:33.461448  257827 system_pods.go:89] "kube-apiserver-no-preload-092439" [9cd90560-e18c-48f9-b039-b4fcf78cd20a] Running
	I1210 23:04:33.461463  257827 system_pods.go:89] "kube-controller-manager-no-preload-092439" [f1159f3f-4fb4-4538-92e3-972a304606e6] Running
	I1210 23:04:33.461471  257827 system_pods.go:89] "kube-proxy-gqz42" [41e804a2-5521-4737-9e8e-2634e81b3bca] Running
	I1210 23:04:33.461476  257827 system_pods.go:89] "kube-scheduler-no-preload-092439" [b5905ebb-f136-4ee4-a5bb-43a5a032ead6] Running
	I1210 23:04:33.461483  257827 system_pods.go:89] "storage-provisioner" [96a4d309-cf31-43d0-8c93-6c924a7f1647] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:33.461514  257827 retry.go:31] will retry after 217.695308ms: missing components: kube-dns
	I1210 23:04:33.682797  257827 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:33.682825  257827 system_pods.go:89] "coredns-7d764666f9-5tpb8" [fbc2ce49-615f-42cc-bd9d-806000e42928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:04:33.682831  257827 system_pods.go:89] "etcd-no-preload-092439" [3bf608f2-c85f-45d8-ac6d-2f6a28dcab23] Running
	I1210 23:04:33.682837  257827 system_pods.go:89] "kindnet-k4tzd" [54a85499-1f1a-461c-ad48-93a3f600bd39] Running
	I1210 23:04:33.682842  257827 system_pods.go:89] "kube-apiserver-no-preload-092439" [9cd90560-e18c-48f9-b039-b4fcf78cd20a] Running
	I1210 23:04:33.682846  257827 system_pods.go:89] "kube-controller-manager-no-preload-092439" [f1159f3f-4fb4-4538-92e3-972a304606e6] Running
	I1210 23:04:33.682849  257827 system_pods.go:89] "kube-proxy-gqz42" [41e804a2-5521-4737-9e8e-2634e81b3bca] Running
	I1210 23:04:33.682852  257827 system_pods.go:89] "kube-scheduler-no-preload-092439" [b5905ebb-f136-4ee4-a5bb-43a5a032ead6] Running
	I1210 23:04:33.682857  257827 system_pods.go:89] "storage-provisioner" [96a4d309-cf31-43d0-8c93-6c924a7f1647] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:04:33.682869  257827 retry.go:31] will retry after 280.967248ms: missing components: kube-dns
	I1210 23:04:29.748898  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:29.749417  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:29.749474  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:29.749539  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:29.776929  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:29.776951  215904 cri.go:89] found id: ""
	I1210 23:04:29.776961  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:29.777025  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:29.781046  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:29.781156  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:29.807689  215904 cri.go:89] found id: ""
	I1210 23:04:29.807716  215904 logs.go:282] 0 containers: []
	W1210 23:04:29.807725  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:29.807731  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:29.807780  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:29.834722  215904 cri.go:89] found id: ""
	I1210 23:04:29.834745  215904 logs.go:282] 0 containers: []
	W1210 23:04:29.834753  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:29.834758  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:29.834816  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:29.862731  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:29.862752  215904 cri.go:89] found id: ""
	I1210 23:04:29.862762  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:29.862815  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:29.866913  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:29.866980  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:29.894047  215904 cri.go:89] found id: ""
	I1210 23:04:29.894071  215904 logs.go:282] 0 containers: []
	W1210 23:04:29.894081  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:29.894088  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:29.894151  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:29.920907  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:29.920931  215904 cri.go:89] found id: ""
	I1210 23:04:29.920941  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:29.921002  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:29.924827  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:29.924892  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:29.950520  215904 cri.go:89] found id: ""
	I1210 23:04:29.950545  215904 logs.go:282] 0 containers: []
	W1210 23:04:29.950554  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:29.950560  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:29.950633  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:29.977296  215904 cri.go:89] found id: ""
	I1210 23:04:29.977318  215904 logs.go:282] 0 containers: []
	W1210 23:04:29.977326  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:29.977340  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:29.977351  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:30.065400  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:30.065431  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:30.081849  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:30.081883  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:30.138951  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:30.138976  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:30.138992  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:30.169068  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:30.169093  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:30.195853  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:30.195880  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:30.221971  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:30.221995  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:30.266851  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:30.266879  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:32.797571  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:32.797999  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:32.798049  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:32.798106  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:32.825995  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:32.826015  215904 cri.go:89] found id: ""
	I1210 23:04:32.826031  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:32.826089  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:32.830074  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:32.830151  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:32.863922  215904 cri.go:89] found id: ""
	I1210 23:04:32.863944  215904 logs.go:282] 0 containers: []
	W1210 23:04:32.863952  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:32.863958  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:32.864010  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:32.893026  215904 cri.go:89] found id: ""
	I1210 23:04:32.893053  215904 logs.go:282] 0 containers: []
	W1210 23:04:32.893063  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:32.893078  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:32.893140  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:32.921870  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:32.921891  215904 cri.go:89] found id: ""
	I1210 23:04:32.921901  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:32.921955  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:32.926257  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:32.926325  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:32.956857  215904 cri.go:89] found id: ""
	I1210 23:04:32.956878  215904 logs.go:282] 0 containers: []
	W1210 23:04:32.956886  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:32.956893  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:32.956950  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:32.994053  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:32.994072  215904 cri.go:89] found id: ""
	I1210 23:04:32.994081  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:32.994134  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:32.999080  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:32.999147  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:33.031725  215904 cri.go:89] found id: ""
	I1210 23:04:33.031747  215904 logs.go:282] 0 containers: []
	W1210 23:04:33.031754  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:33.031760  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:33.031807  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:33.070974  215904 cri.go:89] found id: ""
	I1210 23:04:33.071111  215904 logs.go:282] 0 containers: []
	W1210 23:04:33.071130  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:33.071143  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:33.071156  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:33.125767  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:33.125801  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:33.161002  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:33.161029  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:33.252830  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:33.252860  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:33.267675  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:33.267700  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:33.331737  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:33.331760  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:33.331776  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:33.378065  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:33.378114  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:33.414126  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:33.414153  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:31.345337  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:31.345716  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:31.345774  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:31.345819  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:31.382265  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:31.382285  218555 cri.go:89] found id: ""
	I1210 23:04:31.382293  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:31.382341  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:31.386075  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:31.386143  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:31.420599  218555 cri.go:89] found id: ""
	I1210 23:04:31.420625  218555 logs.go:282] 0 containers: []
	W1210 23:04:31.420635  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:31.420655  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:31.420715  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:31.455478  218555 cri.go:89] found id: ""
	I1210 23:04:31.455498  218555 logs.go:282] 0 containers: []
	W1210 23:04:31.455506  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:31.455511  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:31.455561  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:31.489503  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:31.489523  218555 cri.go:89] found id: ""
	I1210 23:04:31.489529  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:31.489586  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:31.493234  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:31.493284  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:31.526407  218555 cri.go:89] found id: ""
	I1210 23:04:31.526430  218555 logs.go:282] 0 containers: []
	W1210 23:04:31.526438  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:31.526443  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:31.526503  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:31.561113  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:31.561144  218555 cri.go:89] found id: ""
	I1210 23:04:31.561153  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:31.561215  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:31.564953  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:31.565021  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:31.598962  218555 cri.go:89] found id: ""
	I1210 23:04:31.598984  218555 logs.go:282] 0 containers: []
	W1210 23:04:31.598991  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:31.598997  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:31.599041  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:31.634446  218555 cri.go:89] found id: ""
	I1210 23:04:31.634471  218555 logs.go:282] 0 containers: []
	W1210 23:04:31.634483  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:31.634495  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:31.634512  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:31.650499  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:31.650522  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:31.709581  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:31.709601  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:31.709620  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:31.746916  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:31.746942  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:31.820502  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:31.820536  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:31.856123  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:31.856150  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:31.906801  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:31.906834  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:31.944962  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:31.944987  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:34.546734  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:34.547126  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:34.547176  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:34.547221  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:34.585007  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:34.585030  218555 cri.go:89] found id: ""
	I1210 23:04:34.585039  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:34.585105  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:34.588869  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:34.588934  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:34.626851  218555 cri.go:89] found id: ""
	I1210 23:04:34.626875  218555 logs.go:282] 0 containers: []
	W1210 23:04:34.626884  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:34.626891  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:34.626947  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:34.663940  218555 cri.go:89] found id: ""
	I1210 23:04:34.663961  218555 logs.go:282] 0 containers: []
	W1210 23:04:34.663969  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:34.663974  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:34.664018  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:34.701304  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:34.701324  218555 cri.go:89] found id: ""
	I1210 23:04:34.701334  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:34.701389  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:34.705765  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:34.705833  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:34.741920  218555 cri.go:89] found id: ""
	I1210 23:04:34.741946  218555 logs.go:282] 0 containers: []
	W1210 23:04:34.741959  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:34.741966  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:34.742026  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:34.777619  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:34.777676  218555 cri.go:89] found id: ""
	I1210 23:04:34.777687  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:34.777751  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:34.781584  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:34.781661  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:34.824504  218555 cri.go:89] found id: ""
	I1210 23:04:34.824527  218555 logs.go:282] 0 containers: []
	W1210 23:04:34.824536  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:34.824543  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:34.824600  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:33.967488  257827 system_pods.go:86] 8 kube-system pods found
	I1210 23:04:33.967514  257827 system_pods.go:89] "coredns-7d764666f9-5tpb8" [fbc2ce49-615f-42cc-bd9d-806000e42928] Running
	I1210 23:04:33.967519  257827 system_pods.go:89] "etcd-no-preload-092439" [3bf608f2-c85f-45d8-ac6d-2f6a28dcab23] Running
	I1210 23:04:33.967523  257827 system_pods.go:89] "kindnet-k4tzd" [54a85499-1f1a-461c-ad48-93a3f600bd39] Running
	I1210 23:04:33.967527  257827 system_pods.go:89] "kube-apiserver-no-preload-092439" [9cd90560-e18c-48f9-b039-b4fcf78cd20a] Running
	I1210 23:04:33.967531  257827 system_pods.go:89] "kube-controller-manager-no-preload-092439" [f1159f3f-4fb4-4538-92e3-972a304606e6] Running
	I1210 23:04:33.967537  257827 system_pods.go:89] "kube-proxy-gqz42" [41e804a2-5521-4737-9e8e-2634e81b3bca] Running
	I1210 23:04:33.967542  257827 system_pods.go:89] "kube-scheduler-no-preload-092439" [b5905ebb-f136-4ee4-a5bb-43a5a032ead6] Running
	I1210 23:04:33.967547  257827 system_pods.go:89] "storage-provisioner" [96a4d309-cf31-43d0-8c93-6c924a7f1647] Running
	I1210 23:04:33.967557  257827 system_pods.go:126] duration metric: took 602.718571ms to wait for k8s-apps to be running ...
	I1210 23:04:33.967571  257827 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:04:33.967617  257827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:04:33.983464  257827 system_svc.go:56] duration metric: took 15.877085ms WaitForService to wait for kubelet
	I1210 23:04:33.983496  257827 kubeadm.go:587] duration metric: took 13.46501876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:04:33.983519  257827 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:04:33.986310  257827 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:04:33.986344  257827 node_conditions.go:123] node cpu capacity is 8
	I1210 23:04:33.986363  257827 node_conditions.go:105] duration metric: took 2.838304ms to run NodePressure ...
	I1210 23:04:33.986376  257827 start.go:242] waiting for startup goroutines ...
	I1210 23:04:33.986385  257827 start.go:247] waiting for cluster config update ...
	I1210 23:04:33.986405  257827 start.go:256] writing updated cluster config ...
	I1210 23:04:33.986745  257827 ssh_runner.go:195] Run: rm -f paused
	I1210 23:04:33.991305  257827 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:04:34.068139  257827 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5tpb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.073104  257827 pod_ready.go:94] pod "coredns-7d764666f9-5tpb8" is "Ready"
	I1210 23:04:34.073134  257827 pod_ready.go:86] duration metric: took 4.962409ms for pod "coredns-7d764666f9-5tpb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.075552  257827 pod_ready.go:83] waiting for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.081985  257827 pod_ready.go:94] pod "etcd-no-preload-092439" is "Ready"
	I1210 23:04:34.082008  257827 pod_ready.go:86] duration metric: took 6.434517ms for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.084413  257827 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.089079  257827 pod_ready.go:94] pod "kube-apiserver-no-preload-092439" is "Ready"
	I1210 23:04:34.089100  257827 pod_ready.go:86] duration metric: took 4.656892ms for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.091222  257827 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.396125  257827 pod_ready.go:94] pod "kube-controller-manager-no-preload-092439" is "Ready"
	I1210 23:04:34.396148  257827 pod_ready.go:86] duration metric: took 304.895023ms for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.596402  257827 pod_ready.go:83] waiting for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:34.995326  257827 pod_ready.go:94] pod "kube-proxy-gqz42" is "Ready"
	I1210 23:04:34.995351  257827 pod_ready.go:86] duration metric: took 398.914614ms for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:35.195488  257827 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:35.595227  257827 pod_ready.go:94] pod "kube-scheduler-no-preload-092439" is "Ready"
	I1210 23:04:35.595251  257827 pod_ready.go:86] duration metric: took 399.739664ms for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:04:35.595263  257827 pod_ready.go:40] duration metric: took 1.603925803s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:04:35.639280  257827 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:04:35.641155  257827 out.go:179] * Done! kubectl is now configured to use "no-preload-092439" cluster and "default" namespace by default
	I1210 23:04:35.943802  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:35.944325  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:35.944383  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:35.944438  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:35.975947  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:35.975970  215904 cri.go:89] found id: ""
	I1210 23:04:35.975981  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:35.976037  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:35.980245  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:35.980312  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:36.009170  215904 cri.go:89] found id: ""
	I1210 23:04:36.009196  215904 logs.go:282] 0 containers: []
	W1210 23:04:36.009207  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:36.009215  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:36.009273  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:36.035737  215904 cri.go:89] found id: ""
	I1210 23:04:36.035766  215904 logs.go:282] 0 containers: []
	W1210 23:04:36.035779  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:36.035787  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:36.035847  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:36.065242  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:36.065260  215904 cri.go:89] found id: ""
	I1210 23:04:36.065273  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:36.065320  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:36.069749  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:36.069807  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:36.096095  215904 cri.go:89] found id: ""
	I1210 23:04:36.096119  215904 logs.go:282] 0 containers: []
	W1210 23:04:36.096131  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:36.096139  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:36.096190  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:36.125790  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:36.125814  215904 cri.go:89] found id: ""
	I1210 23:04:36.125828  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:36.125898  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:36.130896  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:36.130971  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:36.162319  215904 cri.go:89] found id: ""
	I1210 23:04:36.162362  215904 logs.go:282] 0 containers: []
	W1210 23:04:36.162373  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:36.162381  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:36.162457  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:36.190526  215904 cri.go:89] found id: ""
	I1210 23:04:36.190547  215904 logs.go:282] 0 containers: []
	W1210 23:04:36.190555  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:36.190569  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:36.190584  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:36.221899  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:36.221939  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:36.311893  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:36.311925  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:36.326628  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:36.326663  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:36.381722  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:36.381742  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:36.381755  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:36.410343  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:36.410376  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:36.436559  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:36.436583  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:36.463018  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:36.463041  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:39.007823  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:39.008233  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:39.008289  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:39.008338  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:39.034788  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:39.034814  215904 cri.go:89] found id: ""
	I1210 23:04:39.034825  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:39.034874  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:39.038945  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:39.039001  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:39.065955  215904 cri.go:89] found id: ""
	I1210 23:04:39.065978  215904 logs.go:282] 0 containers: []
	W1210 23:04:39.065989  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:39.065997  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:39.066076  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:39.092793  215904 cri.go:89] found id: ""
	I1210 23:04:39.092817  215904 logs.go:282] 0 containers: []
	W1210 23:04:39.092826  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:39.092831  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:39.092884  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:39.118156  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:39.118202  215904 cri.go:89] found id: ""
	I1210 23:04:39.118215  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:39.118268  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:39.122283  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:39.122341  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:39.149437  215904 cri.go:89] found id: ""
	I1210 23:04:39.149455  215904 logs.go:282] 0 containers: []
	W1210 23:04:39.149463  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:39.149469  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:39.149515  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:34.860542  218555 cri.go:89] found id: ""
	I1210 23:04:34.860565  218555 logs.go:282] 0 containers: []
	W1210 23:04:34.860575  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:34.860586  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:34.860604  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:34.913331  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:34.913368  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:34.965592  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:34.965631  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:35.007041  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:35.007072  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:35.103718  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:35.103755  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:35.121147  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:35.121197  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:35.179121  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:35.179150  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:35.179163  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:35.215875  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:35.215905  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:37.790931  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:37.791338  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:37.791392  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:37.791447  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:37.825569  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:37.825589  218555 cri.go:89] found id: ""
	I1210 23:04:37.825597  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:37.825667  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:37.829369  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:37.829443  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:37.864017  218555 cri.go:89] found id: ""
	I1210 23:04:37.864042  218555 logs.go:282] 0 containers: []
	W1210 23:04:37.864049  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:37.864055  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:37.864103  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:37.900044  218555 cri.go:89] found id: ""
	I1210 23:04:37.900067  218555 logs.go:282] 0 containers: []
	W1210 23:04:37.900078  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:37.900086  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:37.900141  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:37.933769  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:37.933792  218555 cri.go:89] found id: ""
	I1210 23:04:37.933801  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:37.933853  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:37.937564  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:37.937619  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:37.973617  218555 cri.go:89] found id: ""
	I1210 23:04:37.973637  218555 logs.go:282] 0 containers: []
	W1210 23:04:37.973657  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:37.973665  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:37.973714  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:38.009352  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:38.009380  218555 cri.go:89] found id: ""
	I1210 23:04:38.009389  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:38.009446  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:38.013031  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:38.013098  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:38.047574  218555 cri.go:89] found id: ""
	I1210 23:04:38.047597  218555 logs.go:282] 0 containers: []
	W1210 23:04:38.047605  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:38.047610  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:38.047675  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:38.082313  218555 cri.go:89] found id: ""
	I1210 23:04:38.082335  218555 logs.go:282] 0 containers: []
	W1210 23:04:38.082342  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:38.082351  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:38.082369  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:38.118481  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:38.118506  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:38.193521  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:38.193552  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:38.228882  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:38.228908  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:38.275187  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:38.275216  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:38.312555  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:38.312581  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:38.414123  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:38.414159  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:38.430391  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:38.430418  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:38.491078  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:39.176029  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:39.176047  215904 cri.go:89] found id: ""
	I1210 23:04:39.176054  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:39.176107  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:39.180019  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:39.180080  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:39.205562  215904 cri.go:89] found id: ""
	I1210 23:04:39.205595  215904 logs.go:282] 0 containers: []
	W1210 23:04:39.205608  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:39.205616  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:39.205689  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:39.232357  215904 cri.go:89] found id: ""
	I1210 23:04:39.232381  215904 logs.go:282] 0 containers: []
	W1210 23:04:39.232389  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:39.232398  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:39.232411  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:39.288179  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:39.288198  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:39.288210  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:39.321187  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:39.321221  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:39.349494  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:39.349525  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:39.376187  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:39.376210  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:39.421996  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:39.422028  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:39.452688  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:39.452721  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:39.536165  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:39.536195  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:42.052720  215904 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:04:42.053160  215904 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1210 23:04:42.053211  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:42.053263  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:42.080456  215904 cri.go:89] found id: "23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:42.080480  215904 cri.go:89] found id: ""
	I1210 23:04:42.080490  215904 logs.go:282] 1 containers: [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b]
	I1210 23:04:42.080550  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:42.084407  215904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:42.084473  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:42.110130  215904 cri.go:89] found id: ""
	I1210 23:04:42.110158  215904 logs.go:282] 0 containers: []
	W1210 23:04:42.110170  215904 logs.go:284] No container was found matching "etcd"
	I1210 23:04:42.110178  215904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:42.110237  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:42.137190  215904 cri.go:89] found id: ""
	I1210 23:04:42.137217  215904 logs.go:282] 0 containers: []
	W1210 23:04:42.137228  215904 logs.go:284] No container was found matching "coredns"
	I1210 23:04:42.137236  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:42.137292  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:42.163182  215904 cri.go:89] found id: "bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:42.163207  215904 cri.go:89] found id: ""
	I1210 23:04:42.163217  215904 logs.go:282] 1 containers: [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5]
	I1210 23:04:42.163274  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:42.167036  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:42.167104  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:42.193632  215904 cri.go:89] found id: ""
	I1210 23:04:42.193672  215904 logs.go:282] 0 containers: []
	W1210 23:04:42.193682  215904 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:42.193689  215904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:42.193749  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:42.220941  215904 cri.go:89] found id: "cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:42.220966  215904 cri.go:89] found id: ""
	I1210 23:04:42.220977  215904 logs.go:282] 1 containers: [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6]
	I1210 23:04:42.221029  215904 ssh_runner.go:195] Run: which crictl
	I1210 23:04:42.225014  215904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:42.225084  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:42.252497  215904 cri.go:89] found id: ""
	I1210 23:04:42.252516  215904 logs.go:282] 0 containers: []
	W1210 23:04:42.252524  215904 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:42.252530  215904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:42.252580  215904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:42.279709  215904 cri.go:89] found id: ""
	I1210 23:04:42.279732  215904 logs.go:282] 0 containers: []
	W1210 23:04:42.279741  215904 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:42.279750  215904 logs.go:123] Gathering logs for kube-controller-manager [cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6] ...
	I1210 23:04:42.279762  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbf74797a4091df7b7ce11961091178c4970856a490edd5c16c90a86e5f36ac6"
	I1210 23:04:42.305856  215904 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:42.305880  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:42.351905  215904 logs.go:123] Gathering logs for container status ...
	I1210 23:04:42.351933  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:42.382529  215904 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:42.382553  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:42.471259  215904 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:42.471291  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:42.485926  215904 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:42.485951  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:42.541288  215904 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:42.541309  215904 logs.go:123] Gathering logs for kube-apiserver [23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b] ...
	I1210 23:04:42.541323  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 23b31058567afe9dd23c6bb40dc30844426921e8f8dd46ed329468d6e783d06b"
	I1210 23:04:42.572187  215904 logs.go:123] Gathering logs for kube-scheduler [bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5] ...
	I1210 23:04:42.572213  215904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf348df8b3924fdba83b3bde900e0a469623c42c86d1e56f1799d0e576327ae5"
	I1210 23:04:40.992291  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:40.992813  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:40.992870  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:40.992919  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:41.027129  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:41.027150  218555 cri.go:89] found id: ""
	I1210 23:04:41.027158  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:41.027215  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:41.031104  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:41.031169  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:41.064619  218555 cri.go:89] found id: ""
	I1210 23:04:41.064657  218555 logs.go:282] 0 containers: []
	W1210 23:04:41.064669  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:41.064677  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:41.064734  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:41.099142  218555 cri.go:89] found id: ""
	I1210 23:04:41.099169  218555 logs.go:282] 0 containers: []
	W1210 23:04:41.099176  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:41.099183  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:41.099242  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:41.133639  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:41.133673  218555 cri.go:89] found id: ""
	I1210 23:04:41.133682  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:41.133748  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:41.137558  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:41.137624  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:41.172098  218555 cri.go:89] found id: ""
	I1210 23:04:41.172123  218555 logs.go:282] 0 containers: []
	W1210 23:04:41.172131  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:41.172138  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:41.172197  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:41.206145  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:41.206163  218555 cri.go:89] found id: ""
	I1210 23:04:41.206176  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:41.206229  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:41.210031  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:41.210104  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:41.246676  218555 cri.go:89] found id: ""
	I1210 23:04:41.246703  218555 logs.go:282] 0 containers: []
	W1210 23:04:41.246714  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:41.246721  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:41.246786  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:41.281426  218555 cri.go:89] found id: ""
	I1210 23:04:41.281453  218555 logs.go:282] 0 containers: []
	W1210 23:04:41.281461  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:41.281470  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:41.281485  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:41.330770  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:41.330805  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:41.368300  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:41.368332  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:41.468519  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:41.468551  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:41.484927  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:41.484955  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:41.545549  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:41.545575  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:41.545595  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:41.584503  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:41.584533  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:41.663171  218555 logs.go:123] Gathering logs for kube-controller-manager [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a] ...
	I1210 23:04:41.663206  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:44.200065  218555 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 23:04:44.200528  218555 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1210 23:04:44.200581  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 23:04:44.200635  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 23:04:44.236073  218555 cri.go:89] found id: "03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:44.236107  218555 cri.go:89] found id: ""
	I1210 23:04:44.236114  218555 logs.go:282] 1 containers: [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437]
	I1210 23:04:44.236161  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:44.240117  218555 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 23:04:44.240179  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 23:04:44.274436  218555 cri.go:89] found id: ""
	I1210 23:04:44.274457  218555 logs.go:282] 0 containers: []
	W1210 23:04:44.274465  218555 logs.go:284] No container was found matching "etcd"
	I1210 23:04:44.274470  218555 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 23:04:44.274516  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 23:04:44.309212  218555 cri.go:89] found id: ""
	I1210 23:04:44.309237  218555 logs.go:282] 0 containers: []
	W1210 23:04:44.309245  218555 logs.go:284] No container was found matching "coredns"
	I1210 23:04:44.309251  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 23:04:44.309299  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 23:04:44.343159  218555 cri.go:89] found id: "c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	I1210 23:04:44.343183  218555 cri.go:89] found id: ""
	I1210 23:04:44.343195  218555 logs.go:282] 1 containers: [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef]
	I1210 23:04:44.343246  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:44.347017  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 23:04:44.347077  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 23:04:44.380509  218555 cri.go:89] found id: ""
	I1210 23:04:44.380531  218555 logs.go:282] 0 containers: []
	W1210 23:04:44.380538  218555 logs.go:284] No container was found matching "kube-proxy"
	I1210 23:04:44.380544  218555 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 23:04:44.380591  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 23:04:44.414432  218555 cri.go:89] found id: "526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a"
	I1210 23:04:44.414453  218555 cri.go:89] found id: ""
	I1210 23:04:44.414463  218555 logs.go:282] 1 containers: [526098a3400bbafa238a5e105a9ff6bedb3b7429658dcca74fead4c83d4db36a]
	I1210 23:04:44.414523  218555 ssh_runner.go:195] Run: which crictl
	I1210 23:04:44.418441  218555 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 23:04:44.418500  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 23:04:44.452974  218555 cri.go:89] found id: ""
	I1210 23:04:44.452998  218555 logs.go:282] 0 containers: []
	W1210 23:04:44.453009  218555 logs.go:284] No container was found matching "kindnet"
	I1210 23:04:44.453016  218555 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 23:04:44.453071  218555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 23:04:44.487769  218555 cri.go:89] found id: ""
	I1210 23:04:44.487792  218555 logs.go:282] 0 containers: []
	W1210 23:04:44.487801  218555 logs.go:284] No container was found matching "storage-provisioner"
	I1210 23:04:44.487812  218555 logs.go:123] Gathering logs for CRI-O ...
	I1210 23:04:44.487827  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 23:04:44.539389  218555 logs.go:123] Gathering logs for container status ...
	I1210 23:04:44.539422  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 23:04:44.577699  218555 logs.go:123] Gathering logs for kubelet ...
	I1210 23:04:44.577726  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 23:04:44.668557  218555 logs.go:123] Gathering logs for dmesg ...
	I1210 23:04:44.668586  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 23:04:44.684874  218555 logs.go:123] Gathering logs for describe nodes ...
	I1210 23:04:44.684900  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 23:04:44.742336  218555 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 23:04:44.742355  218555 logs.go:123] Gathering logs for kube-apiserver [03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437] ...
	I1210 23:04:44.742373  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03382f20de408d7ae2a800860ef41c81464204389976e1e9b74d4332eb1b1437"
	I1210 23:04:44.779402  218555 logs.go:123] Gathering logs for kube-scheduler [c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef] ...
	I1210 23:04:44.779433  218555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c24712a8cb01d1d2056b62b0c9e2228aefb730e2c4d2816abf4229c3e6984cef"
	
	
	==> CRI-O <==
	Dec 10 23:04:33 no-preload-092439 crio[764]: time="2025-12-10T23:04:33.358829151Z" level=info msg="Starting container: 5af5a7c4b55c9c972237e7af5fbb37b376331b3b80359d20536d416e909bcc01" id=c70b824b-23cb-458f-91a6-f36d08b9e472 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:04:33 no-preload-092439 crio[764]: time="2025-12-10T23:04:33.361410145Z" level=info msg="Started container" PID=2818 containerID=5af5a7c4b55c9c972237e7af5fbb37b376331b3b80359d20536d416e909bcc01 description=kube-system/coredns-7d764666f9-5tpb8/coredns id=c70b824b-23cb-458f-91a6-f36d08b9e472 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf467952501eb626fa865094631eb42267df5f0b53a01e62b2840e9d3eda58cf
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.104155119Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b2fcda2e-0e28-49d1-9324-a9ecd8a20f01 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.104242671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.109678689Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:72f8a49406793efe2940b10f88d97952bd03bc1ae6324ead723dfbcf91e242d5 UID:dd3bcee3-92a1-4c68-8569-badd5445456f NetNS:/var/run/netns/220696f5-b1b7-4191-b204-c364ffab2cce Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000904710}] Aliases:map[]}"
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.10971332Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.120215241Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:72f8a49406793efe2940b10f88d97952bd03bc1ae6324ead723dfbcf91e242d5 UID:dd3bcee3-92a1-4c68-8569-badd5445456f NetNS:/var/run/netns/220696f5-b1b7-4191-b204-c364ffab2cce Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000904710}] Aliases:map[]}"
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.120391236Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.121362569Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.12249925Z" level=info msg="Ran pod sandbox 72f8a49406793efe2940b10f88d97952bd03bc1ae6324ead723dfbcf91e242d5 with infra container: default/busybox/POD" id=b2fcda2e-0e28-49d1-9324-a9ecd8a20f01 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.124310635Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=81b81d0f-108d-46ba-883f-043424c011a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.124437466Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=81b81d0f-108d-46ba-883f-043424c011a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.124483702Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=81b81d0f-108d-46ba-883f-043424c011a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.125696266Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cca010f6-cec3-4ec7-aa49-5aedf47c5583 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:04:36 no-preload-092439 crio[764]: time="2025-12-10T23:04:36.127353774Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.437488703Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=cca010f6-cec3-4ec7-aa49-5aedf47c5583 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.438039178Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4fa397df-8c9e-4886-8283-8eeb1c5fc739 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.439457457Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4e65f17-31de-44ae-ad80-6a9ec279402b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.442599911Z" level=info msg="Creating container: default/busybox/busybox" id=7edc04c1-aeb5-459a-bf74-169304420452 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.442748591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.445975259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.446368542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.475519172Z" level=info msg="Created container e3d795bd5dc84d45334074aee0b9936af9eeed77c3249d789ffc64ac6262eec1: default/busybox/busybox" id=7edc04c1-aeb5-459a-bf74-169304420452 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.476111351Z" level=info msg="Starting container: e3d795bd5dc84d45334074aee0b9936af9eeed77c3249d789ffc64ac6262eec1" id=af76182a-a24e-462c-a128-e6864b9fb5c9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:04:37 no-preload-092439 crio[764]: time="2025-12-10T23:04:37.477924491Z" level=info msg="Started container" PID=2893 containerID=e3d795bd5dc84d45334074aee0b9936af9eeed77c3249d789ffc64ac6262eec1 description=default/busybox/busybox id=af76182a-a24e-462c-a128-e6864b9fb5c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72f8a49406793efe2940b10f88d97952bd03bc1ae6324ead723dfbcf91e242d5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e3d795bd5dc84       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   72f8a49406793       busybox                                     default
	5af5a7c4b55c9       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   bf467952501eb       coredns-7d764666f9-5tpb8                    kube-system
	4c8d77e923b77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   fa26b5964995b       storage-provisioner                         kube-system
	f82bd75def1ed       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   2221ba3f625f3       kindnet-k4tzd                               kube-system
	1cd786067196b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   92121bed80757       kube-proxy-gqz42                            kube-system
	7126cfee2c878       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   b18135428ad89       kube-controller-manager-no-preload-092439   kube-system
	254d93b8817d0       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   faf8a888dc436       kube-apiserver-no-preload-092439            kube-system
	127f7dc7f6829       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   a1557c8866094       etcd-no-preload-092439                      kube-system
	1ae9424d9a350       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   40240c9dcf78a       kube-scheduler-no-preload-092439            kube-system
	
	
	==> coredns [5af5a7c4b55c9c972237e7af5fbb37b376331b3b80359d20536d416e909bcc01] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60279 - 32714 "HINFO IN 7380238928996055722.5535671836099184514. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.102897202s
	
	
	==> describe nodes <==
	Name:               no-preload-092439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-092439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=no-preload-092439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_04_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:04:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-092439
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:04:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:04:45 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:04:45 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:04:45 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:04:45 +0000   Wed, 10 Dec 2025 23:04:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-092439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bf869612-dadc-4e0f-a9d5-5bc2846c3b03
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-5tpb8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-092439                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-k4tzd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-092439             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-092439    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-gqz42                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-092439             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-092439 event: Registered Node no-preload-092439 in Controller
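	The 850m/10% CPU request figure above is simply the summed container requests measured against the node's 8 CPUs (850m of 8000m, i.e. just over 10%). Once the apiserver answers, the same view can be reproduced from the host; a minimal sketch, assuming the bundled kubectl honours the profile flag for this run:
	
	    out/minikube-linux-amd64 -p no-preload-092439 kubectl -- describe node no-preload-092439
	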
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [127f7dc7f6829c0edc7b66be71b3b7e9923d473c24ed3d9980d6d4fdbf069056] <==
	{"level":"warn","ts":"2025-12-10T23:04:11.864537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:04:11.877610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:04:11.883833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:04:11.890157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:04:11.896302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:04:11.944733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T23:04:12.568310Z","caller":"traceutil/trace.go:172","msg":"trace[1927807140] linearizableReadLoop","detail":"{readStateIndex:9; appliedIndex:9; }","duration":"105.393459ms","start":"2025-12-10T23:04:12.462862Z","end":"2025-12-10T23:04:12.568255Z","steps":["trace[1927807140] 'read index received'  (duration: 105.386336ms)","trace[1927807140] 'applied index is now lower than readState.Index'  (duration: 6.215µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:04:12.582228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.122212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"compact_rev_key\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-10T23:04:12.582286Z","caller":"traceutil/trace.go:172","msg":"trace[1687877224] range","detail":"{range_begin:compact_rev_key; range_end:; response_count:0; response_revision:5; }","duration":"151.217056ms","start":"2025-12-10T23:04:12.431059Z","end":"2025-12-10T23:04:12.582276Z","steps":["trace[1687877224] 'agreement among raft nodes before linearized reading'  (duration: 137.301573ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:04:12.582271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.903533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-12-10T23:04:12.582321Z","caller":"traceutil/trace.go:172","msg":"trace[2038302104] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:5; }","duration":"129.965251ms","start":"2025-12-10T23:04:12.452343Z","end":"2025-12-10T23:04:12.582309Z","steps":["trace[2038302104] 'agreement among raft nodes before linearized reading'  (duration: 115.937531ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582437Z","caller":"traceutil/trace.go:172","msg":"trace[1090218390] transaction","detail":"{read_only:false; response_revision:6; number_of_response:1; }","duration":"175.373225ms","start":"2025-12-10T23:04:12.407030Z","end":"2025-12-10T23:04:12.582403Z","steps":["trace[1090218390] 'process raft request'  (duration: 161.267788ms)","trace[1090218390] 'compare'  (duration: 13.95531ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:04:12.582508Z","caller":"traceutil/trace.go:172","msg":"trace[2117595505] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"174.449999ms","start":"2025-12-10T23:04:12.408047Z","end":"2025-12-10T23:04:12.582497Z","steps":["trace[2117595505] 'process raft request'  (duration: 174.360109ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582520Z","caller":"traceutil/trace.go:172","msg":"trace[1280074976] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"174.569239ms","start":"2025-12-10T23:04:12.407931Z","end":"2025-12-10T23:04:12.582501Z","steps":["trace[1280074976] 'process raft request'  (duration: 174.461842ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582567Z","caller":"traceutil/trace.go:172","msg":"trace[471232611] transaction","detail":"{read_only:false; response_revision:7; number_of_response:1; }","duration":"174.697707ms","start":"2025-12-10T23:04:12.407862Z","end":"2025-12-10T23:04:12.582559Z","steps":["trace[471232611] 'process raft request'  (duration: 174.491858ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582598Z","caller":"traceutil/trace.go:172","msg":"trace[690073011] transaction","detail":"{read_only:false; response_revision:15; number_of_response:1; }","duration":"115.244578ms","start":"2025-12-10T23:04:12.467344Z","end":"2025-12-10T23:04:12.582588Z","steps":["trace[690073011] 'process raft request'  (duration: 115.215499ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582636Z","caller":"traceutil/trace.go:172","msg":"trace[70437320] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"164.549771ms","start":"2025-12-10T23:04:12.418078Z","end":"2025-12-10T23:04:12.582627Z","steps":["trace[70437320] 'process raft request'  (duration: 164.39663ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582698Z","caller":"traceutil/trace.go:172","msg":"trace[1199927105] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"174.601945ms","start":"2025-12-10T23:04:12.408086Z","end":"2025-12-10T23:04:12.582688Z","steps":["trace[1199927105] 'process raft request'  (duration: 174.346847ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582730Z","caller":"traceutil/trace.go:172","msg":"trace[1243631229] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"160.60686ms","start":"2025-12-10T23:04:12.422104Z","end":"2025-12-10T23:04:12.582711Z","steps":["trace[1243631229] 'process raft request'  (duration: 160.397919ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:04:12.582768Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.966406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-10T23:04:12.582796Z","caller":"traceutil/trace.go:172","msg":"trace[553738968] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:15; }","duration":"116.99785ms","start":"2025-12-10T23:04:12.465791Z","end":"2025-12-10T23:04:12.582789Z","steps":["trace[553738968] 'agreement among raft nodes before linearized reading'  (duration: 116.946696ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:04:12.582826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.970031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-12-10T23:04:12.582831Z","caller":"traceutil/trace.go:172","msg":"trace[756138501] transaction","detail":"{read_only:false; response_revision:14; number_of_response:1; }","duration":"115.545058ms","start":"2025-12-10T23:04:12.467265Z","end":"2025-12-10T23:04:12.582810Z","steps":["trace[756138501] 'process raft request'  (duration: 115.268545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.582854Z","caller":"traceutil/trace.go:172","msg":"trace[872810195] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:15; }","duration":"117.006598ms","start":"2025-12-10T23:04:12.465837Z","end":"2025-12-10T23:04:12.582843Z","steps":["trace[872810195] 'agreement among raft nodes before linearized reading'  (duration: 116.949361ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:04:12.583000Z","caller":"traceutil/trace.go:172","msg":"trace[1527432154] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"175.08336ms","start":"2025-12-10T23:04:12.407903Z","end":"2025-12-10T23:04:12.582987Z","steps":["trace[1527432154] 'process raft request'  (duration: 174.472184ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:04:46 up 47 min,  0 user,  load average: 2.52, 2.30, 1.60
	Linux no-preload-092439 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f82bd75def1ed2e163ae1f3a571dff6f9aaae29195f3bf1930f46faeef2254ee] <==
	I1210 23:04:22.389607       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:04:22.389974       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 23:04:22.390157       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:04:22.390171       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:04:22.390181       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:04:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:04:22.594276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:04:22.594298       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:04:22.594312       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:04:22.594435       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:04:22.894987       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:04:22.895009       1 metrics.go:72] Registering metrics
	I1210 23:04:22.895137       1 controller.go:711] "Syncing nftables rules"
	I1210 23:04:32.595714       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:04:32.595781       1 main.go:301] handling current node
	I1210 23:04:42.597746       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:04:42.597792       1 main.go:301] handling current node
	
	
	==> kube-apiserver [254d93b8817d074c49b2988c4a65cc175faa07f3aa60502d5740d13a486af7d2] <==
	I1210 23:04:12.406526       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 23:04:12.406579       1 aggregator.go:187] initial CRD sync complete...
	I1210 23:04:12.406591       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:04:12.406599       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:04:12.406604       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:04:12.583746       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:04:12.594283       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:04:13.304153       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1210 23:04:13.308924       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1210 23:04:13.308945       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 23:04:13.727919       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:04:13.762425       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:04:13.810803       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 23:04:13.816349       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1210 23:04:13.817197       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:04:13.821184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:04:14.338205       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:04:15.024253       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:04:15.033615       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 23:04:15.041378       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 23:04:20.040675       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:04:20.241152       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:04:20.244836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:04:20.340181       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 23:04:44.883023       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:52512: use of closed network connection
	
	
	==> kube-controller-manager [7126cfee2c8784b991c2c9e15aa9b77b5584fab7edfd4e24e9e47ab427e77eea] <==
	I1210 23:04:19.143052       1 range_allocator.go:177] "Sending events to api server"
	I1210 23:04:19.143056       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143111       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143132       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 23:04:19.143139       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:04:19.143157       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143182       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143571       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143589       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143702       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143759       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143773       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143804       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143817       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143941       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.143962       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.145744       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.152392       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.154306       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-092439" podCIDRs=["10.244.0.0/24"]
	I1210 23:04:19.154751       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:04:19.242284       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:19.242307       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 23:04:19.242314       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 23:04:19.255539       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:34.144532       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [1cd786067196b0ee628ecbfd1530af6ec1d645f4842ce5688f6bc07567c70455] <==
	I1210 23:04:20.790837       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:04:20.892624       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:04:20.993229       1 shared_informer.go:377] "Caches are synced"
	I1210 23:04:20.993263       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 23:04:20.993380       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:04:21.025790       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:04:21.025957       1 server_linux.go:136] "Using iptables Proxier"
	I1210 23:04:21.035544       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:04:21.036565       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 23:04:21.036712       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:04:21.039432       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:04:21.041226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:04:21.041748       1 config.go:309] "Starting node config controller"
	I1210 23:04:21.043405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:04:21.043531       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:04:21.042173       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:04:21.044469       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:04:21.042194       1 config.go:200] "Starting service config controller"
	I1210 23:04:21.044613       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:04:21.142274       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:04:21.145543       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:04:21.145567       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1ae9424d9a3502b0c2f71ad63ad0e3180b248f17fc9d45e41a762518bd43d75e] <==
	E1210 23:04:13.173707       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1210 23:04:13.174448       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1210 23:04:13.180342       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1210 23:04:13.181022       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 23:04:13.244534       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1210 23:04:13.245462       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 23:04:13.298990       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 23:04:13.300046       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 23:04:13.317663       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 23:04:13.318574       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1210 23:04:13.331733       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1210 23:04:13.332601       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1210 23:04:13.353132       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1210 23:04:13.353997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1210 23:04:13.363960       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 23:04:13.364825       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 23:04:13.374804       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1210 23:04:13.375779       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1210 23:04:13.400821       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 23:04:13.401838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1210 23:04:13.509474       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1210 23:04:13.510366       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1210 23:04:13.536598       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 23:04:13.537697       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1210 23:04:16.151483       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 23:04:20 no-preload-092439 kubelet[2211]: I1210 23:04:20.393142    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54a85499-1f1a-461c-ad48-93a3f600bd39-xtables-lock\") pod \"kindnet-k4tzd\" (UID: \"54a85499-1f1a-461c-ad48-93a3f600bd39\") " pod="kube-system/kindnet-k4tzd"
	Dec 10 23:04:20 no-preload-092439 kubelet[2211]: I1210 23:04:20.393216    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54a85499-1f1a-461c-ad48-93a3f600bd39-lib-modules\") pod \"kindnet-k4tzd\" (UID: \"54a85499-1f1a-461c-ad48-93a3f600bd39\") " pod="kube-system/kindnet-k4tzd"
	Dec 10 23:04:20 no-preload-092439 kubelet[2211]: I1210 23:04:20.393261    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41e804a2-5521-4737-9e8e-2634e81b3bca-kube-proxy\") pod \"kube-proxy-gqz42\" (UID: \"41e804a2-5521-4737-9e8e-2634e81b3bca\") " pod="kube-system/kube-proxy-gqz42"
	Dec 10 23:04:20 no-preload-092439 kubelet[2211]: I1210 23:04:20.393280    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z6wb\" (UniqueName: \"kubernetes.io/projected/54a85499-1f1a-461c-ad48-93a3f600bd39-kube-api-access-8z6wb\") pod \"kindnet-k4tzd\" (UID: \"54a85499-1f1a-461c-ad48-93a3f600bd39\") " pod="kube-system/kindnet-k4tzd"
	Dec 10 23:04:20 no-preload-092439 kubelet[2211]: I1210 23:04:20.393301    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41e804a2-5521-4737-9e8e-2634e81b3bca-xtables-lock\") pod \"kube-proxy-gqz42\" (UID: \"41e804a2-5521-4737-9e8e-2634e81b3bca\") " pod="kube-system/kube-proxy-gqz42"
	Dec 10 23:04:22 no-preload-092439 kubelet[2211]: I1210 23:04:22.910971    2211 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-k4tzd" podStartSLOduration=1.409215849 podStartE2EDuration="2.91095509s" podCreationTimestamp="2025-12-10 23:04:20 +0000 UTC" firstStartedPulling="2025-12-10 23:04:20.674466251 +0000 UTC m=+5.894870483" lastFinishedPulling="2025-12-10 23:04:22.176205509 +0000 UTC m=+7.396609724" observedRunningTime="2025-12-10 23:04:22.910862326 +0000 UTC m=+8.131266574" watchObservedRunningTime="2025-12-10 23:04:22.91095509 +0000 UTC m=+8.131359324"
	Dec 10 23:04:22 no-preload-092439 kubelet[2211]: I1210 23:04:22.911127    2211 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gqz42" podStartSLOduration=2.911117413 podStartE2EDuration="2.911117413s" podCreationTimestamp="2025-12-10 23:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:04:20.90580167 +0000 UTC m=+6.126205904" watchObservedRunningTime="2025-12-10 23:04:22.911117413 +0000 UTC m=+8.131521645"
	Dec 10 23:04:23 no-preload-092439 kubelet[2211]: E1210 23:04:23.058488    2211 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-092439" containerName="kube-apiserver"
	Dec 10 23:04:26 no-preload-092439 kubelet[2211]: E1210 23:04:26.607009    2211 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-092439" containerName="kube-scheduler"
	Dec 10 23:04:26 no-preload-092439 kubelet[2211]: E1210 23:04:26.909889    2211 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-092439" containerName="kube-scheduler"
	Dec 10 23:04:29 no-preload-092439 kubelet[2211]: E1210 23:04:29.057212    2211 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-092439" containerName="etcd"
	Dec 10 23:04:29 no-preload-092439 kubelet[2211]: E1210 23:04:29.549134    2211 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-092439" containerName="kube-controller-manager"
	Dec 10 23:04:32 no-preload-092439 kubelet[2211]: I1210 23:04:32.959943    2211 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: E1210 23:04:33.064272    2211 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-092439" containerName="kube-apiserver"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: I1210 23:04:33.086810    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527k7\" (UniqueName: \"kubernetes.io/projected/fbc2ce49-615f-42cc-bd9d-806000e42928-kube-api-access-527k7\") pod \"coredns-7d764666f9-5tpb8\" (UID: \"fbc2ce49-615f-42cc-bd9d-806000e42928\") " pod="kube-system/coredns-7d764666f9-5tpb8"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: I1210 23:04:33.086866    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/96a4d309-cf31-43d0-8c93-6c924a7f1647-tmp\") pod \"storage-provisioner\" (UID: \"96a4d309-cf31-43d0-8c93-6c924a7f1647\") " pod="kube-system/storage-provisioner"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: I1210 23:04:33.086899    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q78p\" (UniqueName: \"kubernetes.io/projected/96a4d309-cf31-43d0-8c93-6c924a7f1647-kube-api-access-8q78p\") pod \"storage-provisioner\" (UID: \"96a4d309-cf31-43d0-8c93-6c924a7f1647\") " pod="kube-system/storage-provisioner"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: I1210 23:04:33.087042    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc2ce49-615f-42cc-bd9d-806000e42928-config-volume\") pod \"coredns-7d764666f9-5tpb8\" (UID: \"fbc2ce49-615f-42cc-bd9d-806000e42928\") " pod="kube-system/coredns-7d764666f9-5tpb8"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: E1210 23:04:33.928736    2211 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5tpb8" containerName="coredns"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: I1210 23:04:33.941371    2211 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5tpb8" podStartSLOduration=13.94135329 podStartE2EDuration="13.94135329s" podCreationTimestamp="2025-12-10 23:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:04:33.941315734 +0000 UTC m=+19.161719968" watchObservedRunningTime="2025-12-10 23:04:33.94135329 +0000 UTC m=+19.161757524"
	Dec 10 23:04:33 no-preload-092439 kubelet[2211]: I1210 23:04:33.962245    2211 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.962194134 podStartE2EDuration="12.962194134s" podCreationTimestamp="2025-12-10 23:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:04:33.951226695 +0000 UTC m=+19.171630929" watchObservedRunningTime="2025-12-10 23:04:33.962194134 +0000 UTC m=+19.182598368"
	Dec 10 23:04:34 no-preload-092439 kubelet[2211]: E1210 23:04:34.933982    2211 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5tpb8" containerName="coredns"
	Dec 10 23:04:35 no-preload-092439 kubelet[2211]: I1210 23:04:35.905900    2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4r2\" (UniqueName: \"kubernetes.io/projected/dd3bcee3-92a1-4c68-8569-badd5445456f-kube-api-access-zp4r2\") pod \"busybox\" (UID: \"dd3bcee3-92a1-4c68-8569-badd5445456f\") " pod="default/busybox"
	Dec 10 23:04:35 no-preload-092439 kubelet[2211]: E1210 23:04:35.936073    2211 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5tpb8" containerName="coredns"
	Dec 10 23:04:37 no-preload-092439 kubelet[2211]: I1210 23:04:37.953275    2211 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6397336930000002 podStartE2EDuration="2.953257458s" podCreationTimestamp="2025-12-10 23:04:35 +0000 UTC" firstStartedPulling="2025-12-10 23:04:36.125270933 +0000 UTC m=+21.345675158" lastFinishedPulling="2025-12-10 23:04:37.438794707 +0000 UTC m=+22.659198923" observedRunningTime="2025-12-10 23:04:37.953242316 +0000 UTC m=+23.173646550" watchObservedRunningTime="2025-12-10 23:04:37.953257458 +0000 UTC m=+23.173661690"
	
	
	==> storage-provisioner [4c8d77e923b7709d13ba31b586c2832a1a0711bd25eae1284abec59429da3588] <==
	I1210 23:04:33.374513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:04:33.383367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:04:33.383475       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:04:33.385995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:33.393829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:04:33.393986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:04:33.394147       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-092439_e61d8879-db4a-47ad-9e1d-a906bc0e4988!
	I1210 23:04:33.394149       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e18e7f7-4d20-4032-b0ec-4af7afe85afe", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-092439_e61d8879-db4a-47ad-9e1d-a906bc0e4988 became leader
	W1210 23:04:33.397729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:33.402906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:04:33.494578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-092439_e61d8879-db4a-47ad-9e1d-a906bc0e4988!
	W1210 23:04:35.406356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:35.411302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:37.414311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:37.417634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:39.421324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:39.425000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:41.428545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:41.432965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:43.436670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:43.441710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:45.445105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:04:45.451689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-092439 -n no-preload-092439
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-092439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-280530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-280530 --alsologtostderr -v=1: exit status 80 (2.398634074s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-280530 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:05:52.240253  285548 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:05:52.240354  285548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:52.240368  285548 out.go:374] Setting ErrFile to fd 2...
	I1210 23:05:52.240373  285548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:52.240556  285548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:05:52.240807  285548 out.go:368] Setting JSON to false
	I1210 23:05:52.240827  285548 mustload.go:66] Loading cluster: old-k8s-version-280530
	I1210 23:05:52.241200  285548 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:05:52.241703  285548 cli_runner.go:164] Run: docker container inspect old-k8s-version-280530 --format={{.State.Status}}
	I1210 23:05:52.258981  285548 host.go:66] Checking if "old-k8s-version-280530" exists ...
	I1210 23:05:52.259286  285548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:52.315381  285548 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 23:05:52.305478829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:52.316133  285548 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:old-k8s-version-280530 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 23:05:52.318217  285548 out.go:179] * Pausing node old-k8s-version-280530 ... 
	I1210 23:05:52.319867  285548 host.go:66] Checking if "old-k8s-version-280530" exists ...
	I1210 23:05:52.320117  285548 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:52.320154  285548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-280530
	I1210 23:05:52.338310  285548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/old-k8s-version-280530/id_rsa Username:docker}
	I1210 23:05:52.437271  285548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:52.466300  285548 pause.go:52] kubelet running: true
	I1210 23:05:52.466388  285548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:05:52.626690  285548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:05:52.626780  285548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:05:52.694906  285548 cri.go:89] found id: "530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8"
	I1210 23:05:52.694926  285548 cri.go:89] found id: "0d3379629b1158229b94163b8b3e32fb962ff33a627229d5e1164b39219c66ba"
	I1210 23:05:52.694930  285548 cri.go:89] found id: "863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91"
	I1210 23:05:52.694933  285548 cri.go:89] found id: "ccd3cfa0000991c0c4b240977487c688c01c7a36e619316c39f65f765528fb4c"
	I1210 23:05:52.694937  285548 cri.go:89] found id: "d646de05be7ba9022b593e7a4dd5dbd4d5d2786583fa5210b9cfae363a49463f"
	I1210 23:05:52.694940  285548 cri.go:89] found id: "f8d3ca1495f0652ef219712ff154638d44b2ec7e87de3362bff617c05c3c1448"
	I1210 23:05:52.694943  285548 cri.go:89] found id: "eb0a3103a4593d3942d03941084182840f145923fa99311ab045404007d16faf"
	I1210 23:05:52.694945  285548 cri.go:89] found id: "90f97cb5df33bb51af20e9b9570f3dd9eee493b40f75a2a5ee449251871d5827"
	I1210 23:05:52.694948  285548 cri.go:89] found id: "ecd4ac1e0021e9f94b202cd98460d0b3cc215f503cfeb56fd64c76f7de1ab756"
	I1210 23:05:52.694953  285548 cri.go:89] found id: "981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	I1210 23:05:52.694956  285548 cri.go:89] found id: "016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9"
	I1210 23:05:52.694959  285548 cri.go:89] found id: ""
	I1210 23:05:52.694997  285548 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:05:52.707506  285548 retry.go:31] will retry after 211.572392ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:52Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:05:52.919995  285548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:52.933112  285548 pause.go:52] kubelet running: false
	I1210 23:05:52.933165  285548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:05:53.094623  285548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:05:53.094727  285548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:05:53.160863  285548 cri.go:89] found id: "530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8"
	I1210 23:05:53.160880  285548 cri.go:89] found id: "0d3379629b1158229b94163b8b3e32fb962ff33a627229d5e1164b39219c66ba"
	I1210 23:05:53.160884  285548 cri.go:89] found id: "863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91"
	I1210 23:05:53.160887  285548 cri.go:89] found id: "ccd3cfa0000991c0c4b240977487c688c01c7a36e619316c39f65f765528fb4c"
	I1210 23:05:53.160890  285548 cri.go:89] found id: "d646de05be7ba9022b593e7a4dd5dbd4d5d2786583fa5210b9cfae363a49463f"
	I1210 23:05:53.160893  285548 cri.go:89] found id: "f8d3ca1495f0652ef219712ff154638d44b2ec7e87de3362bff617c05c3c1448"
	I1210 23:05:53.160907  285548 cri.go:89] found id: "eb0a3103a4593d3942d03941084182840f145923fa99311ab045404007d16faf"
	I1210 23:05:53.160909  285548 cri.go:89] found id: "90f97cb5df33bb51af20e9b9570f3dd9eee493b40f75a2a5ee449251871d5827"
	I1210 23:05:53.160912  285548 cri.go:89] found id: "ecd4ac1e0021e9f94b202cd98460d0b3cc215f503cfeb56fd64c76f7de1ab756"
	I1210 23:05:53.160918  285548 cri.go:89] found id: "981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	I1210 23:05:53.160921  285548 cri.go:89] found id: "016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9"
	I1210 23:05:53.160924  285548 cri.go:89] found id: ""
	I1210 23:05:53.160978  285548 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:05:53.172638  285548 retry.go:31] will retry after 229.100619ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:53Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:05:53.402158  285548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:53.415342  285548 pause.go:52] kubelet running: false
	I1210 23:05:53.415395  285548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:05:53.558312  285548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:05:53.558392  285548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:05:53.624687  285548 cri.go:89] found id: "530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8"
	I1210 23:05:53.624712  285548 cri.go:89] found id: "0d3379629b1158229b94163b8b3e32fb962ff33a627229d5e1164b39219c66ba"
	I1210 23:05:53.624718  285548 cri.go:89] found id: "863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91"
	I1210 23:05:53.624723  285548 cri.go:89] found id: "ccd3cfa0000991c0c4b240977487c688c01c7a36e619316c39f65f765528fb4c"
	I1210 23:05:53.624728  285548 cri.go:89] found id: "d646de05be7ba9022b593e7a4dd5dbd4d5d2786583fa5210b9cfae363a49463f"
	I1210 23:05:53.624732  285548 cri.go:89] found id: "f8d3ca1495f0652ef219712ff154638d44b2ec7e87de3362bff617c05c3c1448"
	I1210 23:05:53.624736  285548 cri.go:89] found id: "eb0a3103a4593d3942d03941084182840f145923fa99311ab045404007d16faf"
	I1210 23:05:53.624739  285548 cri.go:89] found id: "90f97cb5df33bb51af20e9b9570f3dd9eee493b40f75a2a5ee449251871d5827"
	I1210 23:05:53.624742  285548 cri.go:89] found id: "ecd4ac1e0021e9f94b202cd98460d0b3cc215f503cfeb56fd64c76f7de1ab756"
	I1210 23:05:53.624747  285548 cri.go:89] found id: "981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	I1210 23:05:53.624750  285548 cri.go:89] found id: "016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9"
	I1210 23:05:53.624752  285548 cri.go:89] found id: ""
	I1210 23:05:53.624788  285548 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:05:53.637639  285548 retry.go:31] will retry after 694.495214ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:53Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:05:54.332510  285548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:54.347205  285548 pause.go:52] kubelet running: false
	I1210 23:05:54.347254  285548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:05:54.494604  285548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:05:54.494690  285548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:05:54.559540  285548 cri.go:89] found id: "530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8"
	I1210 23:05:54.559558  285548 cri.go:89] found id: "0d3379629b1158229b94163b8b3e32fb962ff33a627229d5e1164b39219c66ba"
	I1210 23:05:54.559562  285548 cri.go:89] found id: "863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91"
	I1210 23:05:54.559580  285548 cri.go:89] found id: "ccd3cfa0000991c0c4b240977487c688c01c7a36e619316c39f65f765528fb4c"
	I1210 23:05:54.559583  285548 cri.go:89] found id: "d646de05be7ba9022b593e7a4dd5dbd4d5d2786583fa5210b9cfae363a49463f"
	I1210 23:05:54.559586  285548 cri.go:89] found id: "f8d3ca1495f0652ef219712ff154638d44b2ec7e87de3362bff617c05c3c1448"
	I1210 23:05:54.559589  285548 cri.go:89] found id: "eb0a3103a4593d3942d03941084182840f145923fa99311ab045404007d16faf"
	I1210 23:05:54.559592  285548 cri.go:89] found id: "90f97cb5df33bb51af20e9b9570f3dd9eee493b40f75a2a5ee449251871d5827"
	I1210 23:05:54.559595  285548 cri.go:89] found id: "ecd4ac1e0021e9f94b202cd98460d0b3cc215f503cfeb56fd64c76f7de1ab756"
	I1210 23:05:54.559604  285548 cri.go:89] found id: "981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	I1210 23:05:54.559608  285548 cri.go:89] found id: "016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9"
	I1210 23:05:54.559612  285548 cri.go:89] found id: ""
	I1210 23:05:54.559676  285548 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:05:54.573452  285548 out.go:203] 
	W1210 23:05:54.574667  285548 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:05:54.574685  285548 out.go:285] * 
	* 
	W1210 23:05:54.579065  285548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:05:54.580346  285548 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-280530 --alsologtostderr -v=1 failed: exit status 80
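The stderr above shows the sequence the pause command walks through: check whether kubelet is active, disable it, list CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then run `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" and is retried three times before the command exits with GUEST_PAUSE. The snippet below is a minimal, hand-written Go sketch of that final retry loop only, for reproducing the probe on the node by hand; it is not minikube's retry.go, and the backoff values are simply the ones printed in the log above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch only: re-run the probe that fails in the log above ("sudo runc list -f json")
// a few times with the backoffs the log reports, then give up.
func main() {
	backoffs := []time.Duration{211 * time.Millisecond, 229 * time.Millisecond, 694 * time.Millisecond}
	for i, d := range backoffs {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", i+1, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", i+1, err, out)
		time.Sleep(d)
	}
	fmt.Println("still failing after retries; this is the point where pause exits with GUEST_PAUSE")
}

Run inside the node (for example after `minikube ssh -p old-k8s-version-280530`), this reproduces the same "open /run/runc: no such file or directory" error whenever the runc state directory is absent.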
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-280530
helpers_test.go:244: (dbg) docker inspect old-k8s-version-280530:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e",
	        "Created": "2025-12-10T23:03:39.731784379Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270748,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:04:51.045180808Z",
	            "FinishedAt": "2025-12-10T23:04:50.162497522Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/hostname",
	        "HostsPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/hosts",
	        "LogPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e-json.log",
	        "Name": "/old-k8s-version-280530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-280530:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-280530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e",
	                "LowerDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-280530",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-280530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-280530",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-280530",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-280530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2b5c98ebb2ccd11bef39099a68fe01bf15001b8c90c8508a54c1c4a25396700",
	            "SandboxKey": "/var/run/docker/netns/a2b5c98ebb2c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-280530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a08a4bae7c4413ec6f525605767e6d6cb6a704250cf4124a75f3ad968a97154c",
	                    "EndpointID": "1f11f6fcdfc888e05e6d26bd7f6eab10cb4e92530ffd8b05ede786c891192815",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "26:4b:be:a2:cd:c5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-280530",
	                        "733a37f892c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
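The inspect dump confirms the container was left running and never actually paused (State.Status is "running" and State.Paused is false). When triaging this locally it can be quicker to pull just those two fields with docker's --format template instead of reading the full JSON; the following short Go sketch (not part of the test harness, and assuming a local docker CLI plus the profile name used in this run) does exactly that.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Sketch only: print the two State fields the post-mortem cares about,
// using the same docker CLI the harness shells out to.
func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Status}} paused={{.State.Paused}}",
		"old-k8s-version-280530").Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}
	fmt.Print(string(out)) // after the failed pause above this prints: running paused=false
}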
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530: exit status 2 (319.284288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25: (1.115647461s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p force-systemd-flag-725815                                                                                                                                                                                                                  │ force-systemd-flag-725815    │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ stop    │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p old-k8s-version-280530 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p no-preload-092439 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                  │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                     │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                               │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                               │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:05:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:05:21.315417  279952 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:05:21.315552  279952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:21.315558  279952 out.go:374] Setting ErrFile to fd 2...
	I1210 23:05:21.315563  279952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:21.315908  279952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:05:21.316533  279952 out.go:368] Setting JSON to false
	I1210 23:05:21.318152  279952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2863,"bootTime":1765405058,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:05:21.318230  279952 start.go:143] virtualization: kvm guest
	I1210 23:05:21.321680  279952 out.go:179] * [default-k8s-diff-port-443884] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:05:21.323296  279952 notify.go:221] Checking for updates...
	I1210 23:05:21.323311  279952 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:05:21.325578  279952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:05:21.327595  279952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:21.329578  279952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:05:21.331385  279952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:05:21.333078  279952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:05:21.335474  279952 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:21.335731  279952 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:05:21.336011  279952 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:05:21.336212  279952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:05:21.377288  279952 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:05:21.377534  279952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:21.465505  279952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-10 23:05:21.452703979 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:21.465709  279952 docker.go:319] overlay module found
	I1210 23:05:21.469448  279952 out.go:179] * Using the docker driver based on user configuration
	I1210 23:05:21.471121  279952 start.go:309] selected driver: docker
	I1210 23:05:21.471145  279952 start.go:927] validating driver "docker" against <nil>
	I1210 23:05:21.471160  279952 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:05:21.472520  279952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:21.571004  279952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-10 23:05:21.553945001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:21.571242  279952 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:05:21.571571  279952 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:05:21.578337  279952 out.go:179] * Using Docker driver with root privileges
	I1210 23:05:21.580966  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:21.581055  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:21.581069  279952 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:05:21.581180  279952 start.go:353] cluster config:
	{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:21.582782  279952 out.go:179] * Starting "default-k8s-diff-port-443884" primary control-plane node in "default-k8s-diff-port-443884" cluster
	I1210 23:05:21.585021  279952 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:05:21.587372  279952 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:05:21.589118  279952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:05:21.589144  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:21.589177  279952 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:05:21.589190  279952 cache.go:65] Caching tarball of preloaded images
	I1210 23:05:21.589295  279952 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:05:21.589311  279952 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:05:21.589446  279952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:05:21.589476  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json: {Name:mkf6ccf560ea7c2158ea0ed416f5c6dd51668fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:21.620171  279952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:05:21.620196  279952 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:05:21.620212  279952 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:05:21.620250  279952 start.go:360] acquireMachinesLock for default-k8s-diff-port-443884: {Name:mk4710330ecf7371e663f4e39eab0b9ebe0090d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:05:21.620352  279952 start.go:364] duration metric: took 82.7µs to acquireMachinesLock for "default-k8s-diff-port-443884"
	I1210 23:05:21.620381  279952 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-44
3884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:21.620476  279952 start.go:125] createHost starting for "" (driver="docker")
	W1210 23:05:20.835197  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:23.334201  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:20.213276  278136 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-468067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (5.160420694s)
	I1210 23:05:20.213311  278136 kic.go:203] duration metric: took 5.160581371s to extract preloaded images to volume ...
	W1210 23:05:20.213421  278136 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:05:20.213458  278136 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:05:20.213628  278136 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:05:20.306959  278136 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-468067 --name embed-certs-468067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-468067 --network embed-certs-468067 --ip 192.168.103.2 --volume embed-certs-468067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:05:21.298889  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Running}}
	I1210 23:05:21.328925  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.361796  278136 cli_runner.go:164] Run: docker exec embed-certs-468067 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:05:21.435264  278136 oci.go:144] the created container "embed-certs-468067" has a running status.
	I1210 23:05:21.435296  278136 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa...
	I1210 23:05:21.554156  278136 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:05:21.588772  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.612161  278136 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:05:21.612185  278136 kic_runner.go:114] Args: [docker exec --privileged embed-certs-468067 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:05:21.675540  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.696943  278136 machine.go:94] provisionDockerMachine start ...
	I1210 23:05:21.697041  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:21.727545  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:21.728127  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:21.728218  278136 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:05:21.729164  278136 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59570->127.0.0.1:33079: read: connection reset by peer
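The dial error above is transient: the kic container has only just started, and the same SSH command succeeds about three seconds later in this log. The target is the host port Docker mapped to the container's 22/tcp (33079 here, read via the inspect template a few lines up). As an illustration only (not run by the test), the equivalent manual connection from the host would be

    ssh -i /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa -p 33079 docker@127.0.0.1

using the key path, port and docker user that the provisioner records elsewhere in this log.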
	W1210 23:05:22.527416  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:25.026352  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
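The pod_ready warnings above are the harness polling coredns readiness in the parallel profiles; in a stock kubeadm cluster coredns carries the k8s-app=kube-dns label, so as an illustration only (not run here) the same check done by hand would be

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

against the corresponding kubeconfig context. The pods being waited on (coredns-5dd5756b68-6mzkn, coredns-7d764666f9-5tpb8) are named in the warnings themselves.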
	I1210 23:05:21.623805  279952 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:05:21.624881  279952 start.go:159] libmachine.API.Create for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:05:21.624987  279952 client.go:173] LocalClient.Create starting
	I1210 23:05:21.625096  279952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:05:21.625190  279952 main.go:143] libmachine: Decoding PEM data...
	I1210 23:05:21.625214  279952 main.go:143] libmachine: Parsing certificate...
	I1210 23:05:21.625283  279952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:05:21.625309  279952 main.go:143] libmachine: Decoding PEM data...
	I1210 23:05:21.625323  279952 main.go:143] libmachine: Parsing certificate...
	I1210 23:05:21.625872  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:05:21.655788  279952 cli_runner.go:211] docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:05:21.655978  279952 network_create.go:284] running [docker network inspect default-k8s-diff-port-443884] to gather additional debugging logs...
	I1210 23:05:21.656086  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884
	W1210 23:05:21.679674  279952 cli_runner.go:211] docker network inspect default-k8s-diff-port-443884 returned with exit code 1
	I1210 23:05:21.679708  279952 network_create.go:287] error running [docker network inspect default-k8s-diff-port-443884]: docker network inspect default-k8s-diff-port-443884: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-443884 not found
	I1210 23:05:21.679724  279952 network_create.go:289] output of [docker network inspect default-k8s-diff-port-443884]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-443884 not found
	
	** /stderr **
	I1210 23:05:21.679849  279952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:21.703214  279952 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:05:21.704277  279952 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:05:21.705309  279952 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:05:21.706496  279952 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001da1570}
	I1210 23:05:21.706530  279952 network_create.go:124] attempt to create docker network default-k8s-diff-port-443884 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 23:05:21.706582  279952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 default-k8s-diff-port-443884
	I1210 23:05:21.819320  279952 network_create.go:108] docker network default-k8s-diff-port-443884 192.168.76.0/24 created
	I1210 23:05:21.819379  279952 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-443884" container
	I1210 23:05:21.819492  279952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:05:21.839558  279952 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-443884 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:05:21.889515  279952 oci.go:103] Successfully created a docker volume default-k8s-diff-port-443884
	I1210 23:05:21.889621  279952 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-443884-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --entrypoint /usr/bin/test -v default-k8s-diff-port-443884:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:05:22.589872  279952 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-443884
	I1210 23:05:22.589953  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:22.589971  279952 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:05:22.590062  279952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-443884:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
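The network.go lines above show how the subnet was chosen: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already held by other profiles, so the scan (advancing in steps of 9 in this log) settles on 192.168.76.0/24, and the bridge network is then created with the docker network create invocation captured above. A minimal standalone sketch of the same idea follows; it is not minikube's implementation, and the helper name takenSubnets is invented for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets asks the Docker CLI for every network's IPAM subnets,
// the same information the subnet scan above is based on.
func takenSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"-f", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			return nil, err
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Walk 192.168.49.0/24, 192.168.58.0/24, ... in steps of 9, matching the
	// sequence visible in the log, and report the first unused candidate.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("first free candidate:", subnet)
			return
		}
	}
	fmt.Println("no free candidate in the scanned range")
}

The sketch only reports a candidate; in the log the cli_runner then creates the network with --driver=bridge, --subnet and --gateway flags and labels it for minikube.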
	I1210 23:05:24.880730  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468067
	
	I1210 23:05:24.880753  278136 ubuntu.go:182] provisioning hostname "embed-certs-468067"
	I1210 23:05:24.880818  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:24.901219  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:24.901446  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:24.901460  278136 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-468067 && echo "embed-certs-468067" | sudo tee /etc/hostname
	I1210 23:05:25.065733  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468067
	
	I1210 23:05:25.065811  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.085124  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:25.085344  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:25.085361  278136 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-468067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-468067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:05:25.220604  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:05:25.220634  278136 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:05:25.220666  278136 ubuntu.go:190] setting up certificates
	I1210 23:05:25.220677  278136 provision.go:84] configureAuth start
	I1210 23:05:25.220737  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:25.241192  278136 provision.go:143] copyHostCerts
	I1210 23:05:25.241268  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:05:25.241284  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:05:25.241383  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:05:25.241538  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:05:25.241555  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:05:25.241600  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:05:25.241727  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:05:25.241740  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:05:25.241788  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:05:25.241886  278136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468067 san=[127.0.0.1 192.168.103.2 embed-certs-468067 localhost minikube]
	I1210 23:05:25.496542  278136 provision.go:177] copyRemoteCerts
	I1210 23:05:25.496634  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:05:25.496716  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.514526  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:25.614722  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:05:25.691594  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:05:25.711435  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:05:25.733589  278136 provision.go:87] duration metric: took 512.897643ms to configureAuth
	I1210 23:05:25.733724  278136 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:05:25.733949  278136 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:25.734075  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.754610  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:25.754957  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:25.754983  278136 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:05:26.511482  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:05:26.511510  278136 machine.go:97] duration metric: took 4.814544284s to provisionDockerMachine
	I1210 23:05:26.511524  278136 client.go:176] duration metric: took 12.277945952s to LocalClient.Create
	I1210 23:05:26.511549  278136 start.go:167] duration metric: took 12.278077155s to libmachine.API.Create "embed-certs-468067"
	I1210 23:05:26.511560  278136 start.go:293] postStartSetup for "embed-certs-468067" (driver="docker")
	I1210 23:05:26.511572  278136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:05:26.511763  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:05:26.511852  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:26.532552  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:26.704820  278136 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:05:26.709721  278136 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:05:26.709754  278136 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:05:26.709769  278136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:05:26.709845  278136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:05:26.709948  278136 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:05:26.710085  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:05:26.721562  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:26.848263  278136 start.go:296] duration metric: took 336.688388ms for postStartSetup
	I1210 23:05:26.848691  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:26.873274  278136 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/config.json ...
	I1210 23:05:26.873610  278136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:05:26.873692  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:26.900475  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.006888  278136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:05:27.012829  278136 start.go:128] duration metric: took 12.782191279s to createHost
	I1210 23:05:27.012864  278136 start.go:83] releasing machines lock for "embed-certs-468067", held for 12.782341389s
	I1210 23:05:27.012933  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:27.036898  278136 ssh_runner.go:195] Run: cat /version.json
	I1210 23:05:27.036959  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:27.036970  278136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:05:27.037076  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:27.060167  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.060474  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.162188  278136 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:27.226209  278136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:05:27.275765  278136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:05:27.281847  278136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:05:27.281930  278136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:05:27.318410  278136 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:05:27.318440  278136 start.go:496] detecting cgroup driver to use...
	I1210 23:05:27.318475  278136 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:05:27.318526  278136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:05:27.343038  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:05:27.364315  278136 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:05:27.364384  278136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:05:27.389787  278136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:05:27.413856  278136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:05:27.541797  278136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:05:27.670940  278136 docker.go:234] disabling docker service ...
	I1210 23:05:27.671031  278136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:05:27.697315  278136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:05:27.716184  278136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:05:27.850931  278136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:05:27.981061  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:05:27.996218  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:05:28.014155  278136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:05:28.014219  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.051730  278136 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:05:28.051784  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.065018  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.103431  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.116352  278136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:05:28.126426  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.145779  278136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.179941  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.228512  278136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:05:28.238742  278136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:05:28.248400  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:28.341055  278136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:05:28.494660  278136 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:05:28.494733  278136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:05:28.499231  278136 start.go:564] Will wait 60s for crictl version
	I1210 23:05:28.499291  278136 ssh_runner.go:195] Run: which crictl
	I1210 23:05:28.503669  278136 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:05:28.532177  278136 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:05:28.532269  278136 ssh_runner.go:195] Run: crio --version
	I1210 23:05:28.561587  278136 ssh_runner.go:195] Run: crio --version
	I1210 23:05:28.592747  278136 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
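Taken together, the crictl.yaml write and the sed edits above point the tooling at /var/run/crio/crio.sock and leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values (reconstructed here from the commands for readability, not copied from the machine):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

crio is then restarted, and the crictl/crio version probes above confirm cri-o 1.34.3 is answering on the socket.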
	W1210 23:05:25.371310  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:27.842945  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:28.594020  278136 cli_runner.go:164] Run: docker network inspect embed-certs-468067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:28.612293  278136 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 23:05:28.616598  278136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
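The grep and the bash one-liner above give the guest a stable name for the host: any old host.minikube.internal entry is filtered out of /etc/hosts and a fresh mapping to 192.168.103.1, the gateway of this profile's network, is appended. As an illustration only (not run here), getent hosts host.minikube.internal inside the node would then resolve to 192.168.103.1.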
	I1210 23:05:28.627201  278136 kubeadm.go:884] updating cluster {Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:05:28.627316  278136 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:28.627367  278136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:28.661883  278136 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:28.661902  278136 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:05:28.661944  278136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:28.687014  278136 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:28.687034  278136 cache_images.go:86] Images are preloaded, skipping loading
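The preload check above only needs to confirm that the expected v1.34.2 images are already in CRI-O's store, so no tarball extraction or image load happens. The same listing can be reproduced by hand on the node; jq is an assumption here and may not be present in the kicbase image:

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
    # without jq, the plain table view:
    sudo crictl images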
	I1210 23:05:28.687041  278136 kubeadm.go:935] updating node { 192.168.103.2  8443 v1.34.2 crio true true} ...
	I1210 23:05:28.687129  278136 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-468067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:05:28.687190  278136 ssh_runner.go:195] Run: crio config
	I1210 23:05:28.733943  278136 cni.go:84] Creating CNI manager for ""
	I1210 23:05:28.733974  278136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:28.733996  278136 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:05:28.734025  278136 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-468067 NodeName:embed-certs-468067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:05:28.734178  278136 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-468067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:05:28.734252  278136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:05:28.742810  278136 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:05:28.742874  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:05:28.751108  278136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1210 23:05:28.763770  278136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:05:28.779326  278136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
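At this point the rendered kubeadm config (the YAML dumped above) sits on the node as /var/tmp/minikube/kubeadm.yaml.new. It can be sanity-checked before init with kubeadm's own tooling; the test does not do this, it is only a hedged way to debug a bad config by hand:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or a full dry run that does not touch node state:
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run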
	I1210 23:05:28.792419  278136 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:05:28.796143  278136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:28.806368  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:28.886347  278136 ssh_runner.go:195] Run: sudo systemctl start kubelet
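If the kubelet failed to start here, the usual checks are the drop-in that was just copied over and the unit journal (paths taken from the scp steps above):

    sudo systemctl status kubelet --no-pager
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo journalctl -u kubelet --no-pager -n 50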
	I1210 23:05:28.915355  278136 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067 for IP: 192.168.103.2
	I1210 23:05:28.915375  278136 certs.go:195] generating shared ca certs ...
	I1210 23:05:28.915391  278136 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:28.915538  278136 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:05:28.915578  278136 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:05:28.915589  278136 certs.go:257] generating profile certs ...
	I1210 23:05:28.915662  278136 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key
	I1210 23:05:28.915683  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt with IP's: []
	I1210 23:05:29.071762  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt ...
	I1210 23:05:29.071790  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt: {Name:mke0e555380504e9132d2137e7e3455acb66a23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.071961  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key ...
	I1210 23:05:29.071972  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key: {Name:mkade729adab8303334fe37f8122b250a832c9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.072045  278136 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675
	I1210 23:05:29.072062  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1210 23:05:29.182555  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 ...
	I1210 23:05:29.182578  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675: {Name:mk79dcee6a7b68243255d08226f8c8ea8df6f017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.182744  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675 ...
	I1210 23:05:29.182757  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675: {Name:mk10df82a762ea271844528df46692c222a8362f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.182829  278136 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt
	I1210 23:05:29.182918  278136 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key
	I1210 23:05:29.182985  278136 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key
	I1210 23:05:29.183000  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt with IP's: []
	I1210 23:05:29.307119  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt ...
	I1210 23:05:29.307141  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt: {Name:mk79ff9e69db8cc3194e716f102e712e2d4d77b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.307307  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key ...
	I1210 23:05:29.307320  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key: {Name:mk9ba245274e937db4839af0f85390a9d76968ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
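The three profile certs above (client, apiserver, proxy-client) are generated in Go against the shared minikubeCA, with the SAN list shown in the apiserver step. A rough openssl equivalent for the apiserver cert, purely to illustrate the same shape (minikube does not shell out to openssl for this):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2')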
	I1210 23:05:29.307534  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:05:29.307573  278136 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:05:29.307584  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:05:29.307609  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:05:29.307633  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:05:29.307667  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:05:29.307708  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:29.308231  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:05:29.327101  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:05:29.346183  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:05:29.364478  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:05:29.382184  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 23:05:29.399389  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:05:29.416638  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:05:29.433809  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:05:29.452092  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:05:29.472758  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:05:29.490967  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:05:29.509406  278136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:05:29.522774  278136 ssh_runner.go:195] Run: openssl version
	I1210 23:05:29.529665  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.537656  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:05:29.545565  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.549586  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.549666  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.584765  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:05:29.592832  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:05:29.600987  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.608754  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:05:29.616631  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.620437  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.620484  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.655679  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:05:29.664002  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:05:29.672120  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.681216  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:05:29.689857  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.693709  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.693766  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.731507  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:29.739594  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
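The repeated hash-and-symlink pattern above is what makes each CA visible on OpenSSL's default verify path: the subject hash of the PEM determines the link name under /etc/ssl/certs. The same operation for one cert, mirroring the commands in the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"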
	I1210 23:05:29.747821  278136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:05:29.751615  278136 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:05:29.751683  278136 kubeadm.go:401] StartCluster: {Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:29.751761  278136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:05:29.751831  278136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:05:29.777853  278136 cri.go:89] found id: ""
	I1210 23:05:29.777925  278136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:05:29.786216  278136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:05:29.794212  278136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:05:29.794263  278136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:05:29.801953  278136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:05:29.801970  278136 kubeadm.go:158] found existing configuration files:
	
	I1210 23:05:29.802006  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:05:29.809495  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:05:29.809549  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:05:29.817210  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:05:29.825100  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:05:29.825166  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:05:29.833323  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:05:29.841242  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:05:29.841302  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:05:29.848731  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:05:29.856766  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:05:29.856814  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
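The block above is the stale-config cleanup: each kubeconfig that kubeadm would refuse to overwrite is kept only if it already points at the expected control-plane endpoint, otherwise it is removed. Condensed into a loop with the endpoint used by this profile:

    ep=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done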
	I1210 23:05:29.865300  278136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:05:29.902403  278136 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:05:29.902454  278136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:05:29.923349  278136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:05:29.923458  278136 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:05:29.923512  278136 kubeadm.go:319] OS: Linux
	I1210 23:05:29.923562  278136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:05:29.923628  278136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:05:29.923714  278136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:05:29.923819  278136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:05:29.923903  278136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:05:29.923977  278136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:05:29.924051  278136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:05:29.924101  278136 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:05:29.981605  278136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:05:29.981771  278136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:05:29.981894  278136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:05:29.988919  278136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 23:05:27.027050  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:29.526193  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:26.862824  279952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-443884:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.272703863s)
	I1210 23:05:26.862856  279952 kic.go:203] duration metric: took 4.272881051s to extract preloaded images to volume ...
	W1210 23:05:26.862949  279952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:05:26.862995  279952 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:05:26.863041  279952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:05:26.938446  279952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-443884 --name default-k8s-diff-port-443884 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --network default-k8s-diff-port-443884 --ip 192.168.76.2 --volume default-k8s-diff-port-443884:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:05:27.537953  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Running}}
	I1210 23:05:27.562632  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.593817  279952 cli_runner.go:164] Run: docker exec default-k8s-diff-port-443884 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:05:27.651271  279952 oci.go:144] the created container "default-k8s-diff-port-443884" has a running status.
	I1210 23:05:27.651311  279952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa...
	I1210 23:05:27.769585  279952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:05:27.800953  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.828718  279952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:05:27.828741  279952 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-443884 chown docker:docker /home/docker/.ssh/authorized_keys]
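Because the kic node is an ordinary Docker container, the SSH bootstrap above can be reproduced with plain docker commands: generate a key on the host, place the public half at /home/docker/.ssh/authorized_keys, and fix ownership. A sketch under that assumption (minikube pushes the file through its own runner rather than docker cp):

    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec --privileged default-k8s-diff-port-443884 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub default-k8s-diff-port-443884:/home/docker/.ssh/authorized_keys
    docker exec --privileged default-k8s-diff-port-443884 chown -R docker:docker /home/docker/.ssh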
	I1210 23:05:27.889900  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.915356  279952 machine.go:94] provisionDockerMachine start ...
	I1210 23:05:27.915454  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:27.951712  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:27.952036  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:27.952052  279952 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:05:27.952985  279952 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 23:05:31.088959  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:05:31.088990  279952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-443884"
	I1210 23:05:31.089070  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.107804  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.108208  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.108239  279952 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-443884 && echo "default-k8s-diff-port-443884" | sudo tee /etc/hostname
	I1210 23:05:31.254706  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:05:31.254790  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.273656  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.273937  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.273961  279952 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-443884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-443884/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-443884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:05:31.409456  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:05:31.409482  279952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:05:31.409529  279952 ubuntu.go:190] setting up certificates
	I1210 23:05:31.409548  279952 provision.go:84] configureAuth start
	I1210 23:05:31.409602  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:31.427336  279952 provision.go:143] copyHostCerts
	I1210 23:05:31.427407  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:05:31.427418  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:05:31.427493  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:05:31.427589  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:05:31.427598  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:05:31.427631  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:05:31.427733  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:05:31.427742  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:05:31.427768  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:05:31.427832  279952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-443884 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-443884 localhost minikube]
	I1210 23:05:31.667347  279952 provision.go:177] copyRemoteCerts
	I1210 23:05:31.667406  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:05:31.667438  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.686302  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:31.784186  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:05:31.803562  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 23:05:31.821057  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:05:31.839727  279952 provision.go:87] duration metric: took 430.167459ms to configureAuth
	I1210 23:05:31.839748  279952 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:05:31.839920  279952 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:31.840025  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.859548  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.859901  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.859927  279952 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:05:32.153794  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:05:32.153821  279952 machine.go:97] duration metric: took 4.238436809s to provisionDockerMachine
	I1210 23:05:32.153835  279952 client.go:176] duration metric: took 10.528837696s to LocalClient.Create
	I1210 23:05:32.153863  279952 start.go:167] duration metric: took 10.528985188s to libmachine.API.Create "default-k8s-diff-port-443884"
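The CRIO_MINIKUBE_OPTIONS step a few lines up writes the insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick manual check that it took effect; the assumption that the kicbase crio unit sources this file via EnvironmentFile is not shown in the log:

    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio
    systemctl cat crio.service | grep -i sysconfig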
	I1210 23:05:32.153875  279952 start.go:293] postStartSetup for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:05:32.153889  279952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:05:32.153949  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:05:32.153985  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.171730  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.270740  279952 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:05:32.274281  279952 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:05:32.274307  279952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:05:32.274319  279952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:05:32.274371  279952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:05:32.274450  279952 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:05:32.274542  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:05:32.282079  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:32.302413  279952 start.go:296] duration metric: took 148.520167ms for postStartSetup
	I1210 23:05:32.302872  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:32.320682  279952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:05:32.321004  279952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:05:32.321053  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.346274  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.443063  279952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:05:32.448104  279952 start.go:128] duration metric: took 10.827612732s to createHost
	I1210 23:05:32.448128  279952 start.go:83] releasing machines lock for "default-k8s-diff-port-443884", held for 10.827764504s
	I1210 23:05:32.448198  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:32.466547  279952 ssh_runner.go:195] Run: cat /version.json
	I1210 23:05:32.466597  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.466663  279952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:05:32.466745  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.486179  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.486510  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.637008  279952 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:32.643974  279952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:05:32.682605  279952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:05:32.688290  279952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:05:32.688368  279952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:05:32.718783  279952 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
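The find/-exec above renames any bridge or podman CNI config out of the way so that only the CNI minikube installs (kindnet for this driver/runtime combination) stays active. The same command with quoting that survives copy/paste:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;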
	I1210 23:05:32.718805  279952 start.go:496] detecting cgroup driver to use...
	I1210 23:05:32.718839  279952 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:05:32.718887  279952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:05:32.736209  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:05:32.749128  279952 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:05:32.749186  279952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:05:32.766975  279952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:05:32.785140  279952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:05:32.874331  279952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:05:32.963222  279952 docker.go:234] disabling docker service ...
	I1210 23:05:32.963291  279952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:05:32.982534  279952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:05:32.997142  279952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:05:33.081960  279952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:05:33.181936  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:05:33.195465  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:05:33.210008  279952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:05:33.210065  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.220700  279952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:05:33.220765  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.229956  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.239377  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.249068  279952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:05:33.257305  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.266019  279952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.279712  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.288539  279952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:05:33.296476  279952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:05:33.303858  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:33.389580  279952 ssh_runner.go:195] Run: sudo systemctl restart crio
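The steps from 23:05:33.195 to 23:05:33.389 amount to a small CRI-O reconfiguration script: point crictl at the crio socket, pin the pause image, switch the cgroup manager to systemd, enable IP forwarding, then reload and restart (the default_sysctls edit for unprivileged ports is omitted here). Condensed, using the same sed targets as the log:

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio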
	I1210 23:05:33.538797  279952 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:05:33.538869  279952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:05:33.543296  279952 start.go:564] Will wait 60s for crictl version
	I1210 23:05:33.543365  279952 ssh_runner.go:195] Run: which crictl
	I1210 23:05:33.547325  279952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:05:33.571444  279952 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
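The version probe above goes through crictl, which reads its endpoint from the /etc/crictl.yaml written earlier; the explicit-flag form of the same check is:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version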
	I1210 23:05:33.571514  279952 ssh_runner.go:195] Run: crio --version
	I1210 23:05:33.598912  279952 ssh_runner.go:195] Run: crio --version
	I1210 23:05:33.630913  279952 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 23:05:30.334341  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:32.334430  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:29.991802  278136 out.go:252]   - Generating certificates and keys ...
	I1210 23:05:29.991901  278136 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:05:29.991990  278136 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:05:30.351608  278136 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:05:30.593176  278136 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:05:30.755320  278136 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:05:30.977407  278136 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:05:31.085043  278136 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:05:31.085216  278136 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-468067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 23:05:31.884952  278136 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:05:31.885114  278136 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-468067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 23:05:32.128820  278136 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:05:32.281129  278136 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:05:33.153677  278136 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:05:33.153771  278136 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:05:33.283014  278136 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:05:33.675630  278136 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:05:33.759625  278136 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:05:33.814126  278136 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:05:34.008745  278136 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:05:34.009454  278136 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:05:34.013938  278136 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:05:33.632188  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:33.650548  279952 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:05:33.654778  279952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:33.665335  279952 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:05:33.665471  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:33.665522  279952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:33.699300  279952 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:33.699325  279952 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:05:33.699383  279952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:33.725754  279952 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:33.725775  279952 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:05:33.725784  279952 kubeadm.go:935] updating node { 192.168.76.2  8444 v1.34.2 crio true true} ...
	I1210 23:05:33.725879  279952 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-443884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:05:33.725958  279952 ssh_runner.go:195] Run: crio config
	I1210 23:05:33.773897  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:33.773919  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:33.773933  279952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:05:33.773952  279952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-443884 NodeName:default-k8s-diff-port-443884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:05:33.774070  279952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-443884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
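	A kubeadm config like the one rendered above can be sanity-checked before it is applied by running kubeadm's dry-run mode against it; a minimal sketch, assuming the staged config path and binaries directory that appear elsewhere in this log:
	  # dry-run renders manifests and validates the config without changing node state
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run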
	
	I1210 23:05:33.774129  279952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:05:33.782558  279952 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:05:33.782623  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:05:33.790780  279952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 23:05:33.803922  279952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:05:33.819325  279952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 23:05:33.833524  279952 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:05:33.837539  279952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:33.847973  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:33.932121  279952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:33.960425  279952 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884 for IP: 192.168.76.2
	I1210 23:05:33.960443  279952 certs.go:195] generating shared ca certs ...
	I1210 23:05:33.960462  279952 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:33.960630  279952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:05:33.960704  279952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:05:33.960718  279952 certs.go:257] generating profile certs ...
	I1210 23:05:33.960792  279952 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key
	I1210 23:05:33.960817  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt with IP's: []
	I1210 23:05:34.057077  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt ...
	I1210 23:05:34.057105  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt: {Name:mk51847952dee09af95f401b00c827a06f5160a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.057270  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key ...
	I1210 23:05:34.057282  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key: {Name:mkf375f3b6a63380e9965a3cb09d66e6ff1b51cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.057361  279952 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94
	I1210 23:05:34.057384  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 23:05:34.136636  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 ...
	I1210 23:05:34.136676  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94: {Name:mk002a91b8c9f2fb4b46891974129537a6ecfc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.136847  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94 ...
	I1210 23:05:34.136862  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94: {Name:mkd3d0eff1194b75939303cc097dff6606b0b6c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.136933  279952 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt
	I1210 23:05:34.137006  279952 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key
	I1210 23:05:34.137066  279952 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key
	I1210 23:05:34.137081  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt with IP's: []
	I1210 23:05:34.220084  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt ...
	I1210 23:05:34.220108  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt: {Name:mka111ca179d41320378687d39fe32a1ab401271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.220284  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key ...
	I1210 23:05:34.220298  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key: {Name:mkfd978f51ccbb0329e7bc88cc26a4c2dc6d8abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.220523  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:05:34.220562  279952 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:05:34.220573  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:05:34.220597  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:05:34.220621  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:05:34.220659  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:05:34.220724  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:34.221261  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:05:34.240495  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:05:34.260518  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:05:34.278207  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:05:34.295549  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 23:05:34.313819  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:05:34.332779  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:05:34.351978  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:05:34.369453  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:05:34.389088  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:05:34.406689  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:05:34.423900  279952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:05:34.436918  279952 ssh_runner.go:195] Run: openssl version
	I1210 23:05:34.443077  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.451518  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:05:34.459429  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.463331  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.463387  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.498849  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:05:34.506923  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:05:34.514672  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.522328  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:05:34.530594  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.534511  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.534565  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.569396  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:34.577310  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:34.585012  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.592934  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:05:34.600629  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.604461  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.604515  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.639297  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:05:34.647330  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
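	The numeric symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes of the corresponding certificates; a minimal sketch of reproducing one by hand, using the same commands the log runs (the shell variable is illustrative):
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"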
	I1210 23:05:34.655251  279952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:05:34.659028  279952 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:05:34.659086  279952 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:34.659172  279952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:05:34.659239  279952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:05:34.690714  279952 cri.go:89] found id: ""
	I1210 23:05:34.690785  279952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:05:34.699614  279952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:05:34.709093  279952 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:05:34.709144  279952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:05:34.717328  279952 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:05:34.717359  279952 kubeadm.go:158] found existing configuration files:
	
	I1210 23:05:34.717405  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 23:05:34.725308  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:05:34.725366  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:05:34.733106  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 23:05:34.741129  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:05:34.741182  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:05:34.749178  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 23:05:34.757226  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:05:34.757275  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:05:34.764816  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 23:05:34.772969  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:05:34.773022  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:05:34.781188  279952 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:05:34.830362  279952 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:05:34.830437  279952 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:05:34.853117  279952 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:05:34.853190  279952 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:05:34.853230  279952 kubeadm.go:319] OS: Linux
	I1210 23:05:34.853297  279952 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:05:34.853373  279952 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:05:34.853416  279952 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:05:34.853458  279952 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:05:34.853513  279952 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:05:34.853553  279952 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:05:34.853661  279952 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:05:34.853730  279952 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:05:34.917131  279952 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:05:34.917280  279952 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:05:34.917435  279952 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:05:34.924504  279952 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 23:05:31.528280  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:34.026219  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:34.926960  279952 out.go:252]   - Generating certificates and keys ...
	I1210 23:05:34.927084  279952 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:05:34.927196  279952 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:05:35.403022  279952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:05:35.705371  279952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:05:36.157799  279952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:05:34.016209  278136 out.go:252]   - Booting up control plane ...
	I1210 23:05:34.016326  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:05:34.016435  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:05:34.017554  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:05:34.032908  278136 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:05:34.033076  278136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:05:34.040913  278136 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:05:34.041222  278136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:05:34.041310  278136 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:05:34.147564  278136 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:05:34.147726  278136 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:05:35.148682  278136 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001186459s
	I1210 23:05:35.151592  278136 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:05:35.151727  278136 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1210 23:05:35.151852  278136 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:05:35.151961  278136 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:05:37.115948  278136 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.964278263s
	I1210 23:05:37.326345  278136 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.174576485s
	I1210 23:05:38.653088  278136 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501379838s
	I1210 23:05:38.672660  278136 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:05:38.682162  278136 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:05:38.691627  278136 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:05:38.691817  278136 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-468067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:05:38.699476  278136 kubeadm.go:319] [bootstrap-token] Using token: vc7tt6.1ma2zdzjremls6oi
	I1210 23:05:36.394195  279952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:05:36.699432  279952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:05:36.699668  279952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-443884 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 23:05:36.853566  279952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:05:36.853729  279952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-443884 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 23:05:37.237894  279952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:05:37.887346  279952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:05:38.035256  279952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:05:38.035414  279952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:05:38.131597  279952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:05:38.206508  279952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:05:38.262108  279952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:05:38.568290  279952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:05:38.740049  279952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:05:38.740793  279952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:05:38.744608  279952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1210 23:05:34.335263  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:36.833884  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:38.834469  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:38.701150  278136 out.go:252]   - Configuring RBAC rules ...
	I1210 23:05:38.701295  278136 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:05:38.704803  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:05:38.709973  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:05:38.712391  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:05:38.714770  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:05:38.717330  278136 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:05:39.059930  278136 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:05:39.476535  278136 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:05:40.059845  278136 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:05:40.060904  278136 kubeadm.go:319] 
	I1210 23:05:40.061003  278136 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:05:40.061040  278136 kubeadm.go:319] 
	I1210 23:05:40.061181  278136 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:05:40.061199  278136 kubeadm.go:319] 
	I1210 23:05:40.061232  278136 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:05:40.061318  278136 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:05:40.061392  278136 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:05:40.061401  278136 kubeadm.go:319] 
	I1210 23:05:40.061493  278136 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:05:40.061510  278136 kubeadm.go:319] 
	I1210 23:05:40.061577  278136 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:05:40.061588  278136 kubeadm.go:319] 
	I1210 23:05:40.061670  278136 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:05:40.061826  278136 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:05:40.061923  278136 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:05:40.061933  278136 kubeadm.go:319] 
	I1210 23:05:40.062072  278136 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:05:40.062192  278136 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:05:40.062214  278136 kubeadm.go:319] 
	I1210 23:05:40.062308  278136 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vc7tt6.1ma2zdzjremls6oi \
	I1210 23:05:40.062443  278136 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:05:40.062470  278136 kubeadm.go:319] 	--control-plane 
	I1210 23:05:40.062478  278136 kubeadm.go:319] 
	I1210 23:05:40.062582  278136 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:05:40.062591  278136 kubeadm.go:319] 
	I1210 23:05:40.062719  278136 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vc7tt6.1ma2zdzjremls6oi \
	I1210 23:05:40.062828  278136 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:05:40.065627  278136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:05:40.065833  278136 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:05:40.065868  278136 cni.go:84] Creating CNI manager for ""
	I1210 23:05:40.065881  278136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:40.067426  278136 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1210 23:05:36.028634  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:38.526674  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:39.026394  270470 pod_ready.go:94] pod "coredns-5dd5756b68-6mzkn" is "Ready"
	I1210 23:05:39.026418  270470 pod_ready.go:86] duration metric: took 37.006112476s for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.029141  270470 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.032878  270470 pod_ready.go:94] pod "etcd-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.032895  270470 pod_ready.go:86] duration metric: took 3.736841ms for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.035267  270470 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.039084  270470 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.039100  270470 pod_ready.go:86] duration metric: took 3.817017ms for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.041365  270470 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.224222  270470 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.224250  270470 pod_ready.go:86] duration metric: took 182.867637ms for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.425713  270470 pod_ready.go:83] waiting for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.824129  270470 pod_ready.go:94] pod "kube-proxy-nvgl4" is "Ready"
	I1210 23:05:39.824155  270470 pod_ready.go:86] duration metric: took 398.41578ms for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.025046  270470 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.424982  270470 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-280530" is "Ready"
	I1210 23:05:40.425010  270470 pod_ready.go:86] duration metric: took 399.940018ms for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.425028  270470 pod_ready.go:40] duration metric: took 38.409041474s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:40.471271  270470 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 23:05:40.472796  270470 out.go:203] 
	W1210 23:05:40.474173  270470 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 23:05:40.475227  270470 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 23:05:40.476535  270470 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-280530" cluster and "default" namespace by default
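	The kubectl skew warning above (host kubectl 1.34.3 against a 1.28.0 cluster) can be sidestepped by using the kubectl that minikube downloads for the cluster's own Kubernetes version, as the log suggests; a minimal sketch using this run's profile name:
	  out/minikube-linux-amd64 -p old-k8s-version-280530 kubectl -- get pods -A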
	I1210 23:05:38.745963  279952 out.go:252]   - Booting up control plane ...
	I1210 23:05:38.746105  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:05:38.746206  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:05:38.747825  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:05:38.762756  279952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:05:38.762924  279952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:05:38.769442  279952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:05:38.769622  279952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:05:38.769715  279952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:05:38.869128  279952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:05:38.869246  279952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:05:40.369850  279952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500867942s
	I1210 23:05:40.374332  279952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:05:40.374482  279952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1210 23:05:40.374711  279952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:05:40.374834  279952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 23:05:40.835431  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:43.334516  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:40.068553  278136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:05:40.073284  278136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:05:40.073306  278136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:05:40.091013  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:05:40.303352  278136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:05:40.303417  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:40.303441  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-468067 minikube.k8s.io/updated_at=2025_12_10T23_05_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=embed-certs-468067 minikube.k8s.io/primary=true
	I1210 23:05:40.313293  278136 ops.go:34] apiserver oom_adj: -16
	I1210 23:05:40.378089  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:40.878855  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:41.378845  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:41.878906  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.378433  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.878834  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:43.378962  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:43.879108  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.393467  279952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.018399943s
	I1210 23:05:42.394503  279952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.020217138s
	I1210 23:05:44.376449  279952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002089254s
	I1210 23:05:44.394198  279952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:05:44.405702  279952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:05:44.416487  279952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:05:44.416805  279952 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-443884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:05:44.426438  279952 kubeadm.go:319] [bootstrap-token] Using token: bdnp9h.to2dgl31xr9dkwz5
	I1210 23:05:44.379177  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:44.878480  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:45.378914  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:45.449851  278136 kubeadm.go:1114] duration metric: took 5.14650104s to wait for elevateKubeSystemPrivileges
	I1210 23:05:45.449886  278136 kubeadm.go:403] duration metric: took 15.698207011s to StartCluster
	I1210 23:05:45.450011  278136 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:45.450102  278136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:45.452199  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:45.452484  278136 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:45.452632  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:05:45.453102  278136 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:45.453099  278136 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:05:45.453199  278136 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-468067"
	I1210 23:05:45.453231  278136 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-468067"
	I1210 23:05:45.453261  278136 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:05:45.453287  278136 addons.go:70] Setting default-storageclass=true in profile "embed-certs-468067"
	I1210 23:05:45.453309  278136 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-468067"
	I1210 23:05:45.453723  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.454265  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.454717  278136 out.go:179] * Verifying Kubernetes components...
	I1210 23:05:45.457422  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:45.486553  278136 addons.go:239] Setting addon default-storageclass=true in "embed-certs-468067"
	I1210 23:05:45.486718  278136 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:05:45.487325  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.490135  278136 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:05:44.428776  279952 out.go:252]   - Configuring RBAC rules ...
	I1210 23:05:44.428945  279952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:05:44.431774  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:05:44.437409  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:05:44.441061  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:05:44.443828  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:05:44.447026  279952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:05:44.782438  279952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:05:45.200076  279952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:05:45.782497  279952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:05:45.783786  279952 kubeadm.go:319] 
	I1210 23:05:45.783890  279952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:05:45.783902  279952 kubeadm.go:319] 
	I1210 23:05:45.783990  279952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:05:45.783998  279952 kubeadm.go:319] 
	I1210 23:05:45.784039  279952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:05:45.784112  279952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:05:45.784188  279952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:05:45.784204  279952 kubeadm.go:319] 
	I1210 23:05:45.784312  279952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:05:45.784331  279952 kubeadm.go:319] 
	I1210 23:05:45.784396  279952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:05:45.784406  279952 kubeadm.go:319] 
	I1210 23:05:45.784469  279952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:05:45.784575  279952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:05:45.784730  279952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:05:45.784744  279952 kubeadm.go:319] 
	I1210 23:05:45.784874  279952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:05:45.784977  279952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:05:45.784989  279952 kubeadm.go:319] 
	I1210 23:05:45.785081  279952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token bdnp9h.to2dgl31xr9dkwz5 \
	I1210 23:05:45.785190  279952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:05:45.785217  279952 kubeadm.go:319] 	--control-plane 
	I1210 23:05:45.785226  279952 kubeadm.go:319] 
	I1210 23:05:45.785345  279952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:05:45.785356  279952 kubeadm.go:319] 
	I1210 23:05:45.785453  279952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token bdnp9h.to2dgl31xr9dkwz5 \
	I1210 23:05:45.785567  279952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:05:45.788874  279952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:05:45.789027  279952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:05:45.789056  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:45.789085  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:45.790618  279952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:05:45.492042  278136 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:45.492059  278136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:05:45.492115  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:45.519499  278136 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:45.519528  278136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:05:45.519625  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:45.523139  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:45.543799  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:45.561861  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:05:45.619261  278136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:45.642303  278136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:45.661850  278136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:45.731298  278136 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
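	For reference, the sed one-liner above rewrites the coredns ConfigMap so that its Corefile gains roughly this block ahead of the forward directive (reconstructed from the command, not captured output):
	  hosts {
	     192.168.103.1 host.minikube.internal
	     fallthrough
	  }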
	I1210 23:05:45.732304  278136 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:05:45.961001  278136 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:05:45.791839  279952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:05:45.796562  279952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:05:45.796582  279952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:05:45.811119  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:05:46.030619  279952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:05:46.030699  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:46.030765  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-443884 minikube.k8s.io/updated_at=2025_12_10T23_05_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=default-k8s-diff-port-443884 minikube.k8s.io/primary=true
	I1210 23:05:46.041384  279952 ops.go:34] apiserver oom_adj: -16
	I1210 23:05:46.113000  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1210 23:05:45.334950  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:45.834730  273929 pod_ready.go:94] pod "coredns-7d764666f9-5tpb8" is "Ready"
	I1210 23:05:45.834762  273929 pod_ready.go:86] duration metric: took 31.506416988s for pod "coredns-7d764666f9-5tpb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.837911  273929 pod_ready.go:83] waiting for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.842136  273929 pod_ready.go:94] pod "etcd-no-preload-092439" is "Ready"
	I1210 23:05:45.842157  273929 pod_ready.go:86] duration metric: took 4.230953ms for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.845582  273929 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.849432  273929 pod_ready.go:94] pod "kube-apiserver-no-preload-092439" is "Ready"
	I1210 23:05:45.849453  273929 pod_ready.go:86] duration metric: took 3.846386ms for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.851434  273929 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.033192  273929 pod_ready.go:94] pod "kube-controller-manager-no-preload-092439" is "Ready"
	I1210 23:05:46.033224  273929 pod_ready.go:86] duration metric: took 181.767834ms for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.232384  273929 pod_ready.go:83] waiting for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.632419  273929 pod_ready.go:94] pod "kube-proxy-gqz42" is "Ready"
	I1210 23:05:46.632450  273929 pod_ready.go:86] duration metric: took 400.040431ms for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.832502  273929 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:47.232861  273929 pod_ready.go:94] pod "kube-scheduler-no-preload-092439" is "Ready"
	I1210 23:05:47.232892  273929 pod_ready.go:86] duration metric: took 400.366591ms for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:47.232908  273929 pod_ready.go:40] duration metric: took 32.909358343s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:47.280508  273929 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:05:47.281991  273929 out.go:179] * Done! kubectl is now configured to use "no-preload-092439" cluster and "default" namespace by default
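Note: the pod_ready waits above check one pod per control-plane label listed at 23:05:47.232908. A hedged equivalent using kubectl wait against the newly configured cluster (context name assumed to match the profile; illustrative only):

    # Wait for CoreDNS and the API server pods to be Ready, mirroring the
    # k8s-app=kube-dns and component=kube-apiserver checks logged above.
    kubectl --context no-preload-092439 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --context no-preload-092439 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=6m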
	I1210 23:05:45.962276  278136 addons.go:530] duration metric: took 509.17689ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:05:46.235747  278136 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-468067" context rescaled to 1 replicas
	W1210 23:05:47.735830  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	I1210 23:05:46.613910  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:47.113080  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:47.613875  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:48.113232  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:48.613174  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:49.113224  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:49.613917  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.113829  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.613873  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.685511  279952 kubeadm.go:1114] duration metric: took 4.654880357s to wait for elevateKubeSystemPrivileges
	I1210 23:05:50.685559  279952 kubeadm.go:403] duration metric: took 16.026470518s to StartCluster
	I1210 23:05:50.685582  279952 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:50.685709  279952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:50.687466  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:50.687720  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:05:50.687732  279952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:50.687802  279952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:05:50.687909  279952 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-443884"
	I1210 23:05:50.687931  279952 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-443884"
	I1210 23:05:50.687946  279952 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-443884"
	I1210 23:05:50.687960  279952 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:05:50.687976  279952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-443884"
	I1210 23:05:50.687954  279952 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:50.688332  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.688484  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.689939  279952 out.go:179] * Verifying Kubernetes components...
	I1210 23:05:50.691358  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:50.711362  279952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:05:50.712612  279952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:50.712632  279952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:05:50.712715  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:50.712803  279952 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-443884"
	I1210 23:05:50.712847  279952 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:05:50.713267  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.743749  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:50.746417  279952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:50.746441  279952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:05:50.746494  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:50.770270  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:50.774861  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:05:50.833407  279952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:50.857046  279952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:50.880889  279952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:50.958517  279952 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1210 23:05:50.959713  279952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:05:51.167487  279952 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:05:51.168805  279952 addons.go:530] duration metric: took 481.002504ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1210 23:05:49.737595  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	W1210 23:05:52.236343  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
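Note: the node_ready retries above poll the node's Ready condition. A hedged one-liner for inspecting the same condition directly (context name assumed to match the profile; not part of the test run):

    # Prints "True" once the kubelet reports the node Ready.
    kubectl --context embed-certs-468067 get node embed-certs-468067 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'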
	
	
	==> CRI-O <==
	Dec 10 23:05:21 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:21.802075897Z" level=info msg="Created container 016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7/kubernetes-dashboard" id=db95ebd0-49c3-4774-a0dd-8ca9f41f7fd4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:21 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:21.803362198Z" level=info msg="Starting container: 016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9" id=7ffadae5-6627-4b28-be7b-9e1aa75f514f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:21 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:21.805417764Z" level=info msg="Started container" PID=1725 containerID=016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7/kubernetes-dashboard id=7ffadae5-6627-4b28-be7b-9e1aa75f514f name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed9386104cb56786a48f9a19ae02de367be282587a580a2f4f9120f5bcfac5a5
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.324813858Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=dcfe7cbb-599c-41bf-9eaa-153d463a41ed name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.32562719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a33914df-5b42-4ffd-b4a6-b5aacd3228a9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.326702288Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4a9ce739-8322-4f8f-b0cf-8f716cbd16b9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.326807563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333025105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333229173Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dc07989e8e74a26d3767eb56ee76505a5f956d244c102b74c0b827fd1e340034/merged/etc/passwd: no such file or directory"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333279117Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dc07989e8e74a26d3767eb56ee76505a5f956d244c102b74c0b827fd1e340034/merged/etc/group: no such file or directory"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333600251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.369471742Z" level=info msg="Created container 530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8: kube-system/storage-provisioner/storage-provisioner" id=4a9ce739-8322-4f8f-b0cf-8f716cbd16b9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.370141389Z" level=info msg="Starting container: 530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8" id=f79475ea-116a-45c2-a9e9-6f73f943edfb name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.371957936Z" level=info msg="Started container" PID=1747 containerID=530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8 description=kube-system/storage-provisioner/storage-provisioner id=f79475ea-116a-45c2-a9e9-6f73f943edfb name=/runtime.v1.RuntimeService/StartContainer sandboxID=d50fe54aaa979788f9b8ecb9f93d35f222562665ddcfc70437c4d651a6da2cb9
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.188354431Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dcffc856-7585-4151-8de8-78b28b83eeb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.189435949Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0c2de86c-cdf4-430f-9aba-2adfd359db7a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.19057661Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper" id=f2759c30-10ef-4c03-ba1e-a077a24bc6ad name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.19075642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.196713218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.197244818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.237444349Z" level=info msg="Created container 981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper" id=f2759c30-10ef-4c03-ba1e-a077a24bc6ad name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.238264033Z" level=info msg="Starting container: 981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce" id=81649f3f-910f-47c1-8124-5d32e4415c0f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.240696622Z" level=info msg="Started container" PID=1763 containerID=981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper id=81649f3f-910f-47c1-8124-5d32e4415c0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=d389e558ee85cfd75a6fe5311f6b6de3d2dd675badccc871458f6091728f0a33
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.350354893Z" level=info msg="Removing container: 6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71" id=738572a6-39c8-425b-9cbf-6347dd88582a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.36077501Z" level=info msg="Removed container 6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper" id=738572a6-39c8-425b-9cbf-6347dd88582a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	981a583e3f2e8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   d389e558ee85c       dashboard-metrics-scraper-5f989dc9cf-2v4wt       kubernetes-dashboard
	530f93f9f6d46       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   d50fe54aaa979       storage-provisioner                              kube-system
	016cec5b1f976       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   ed9386104cb56       kubernetes-dashboard-8694d4445c-2ggd7            kubernetes-dashboard
	465f874eaa2a9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   73aaefa4027d2       busybox                                          default
	0d3379629b115       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   1a3d1f5324a3f       coredns-5dd5756b68-6mzkn                         kube-system
	863419c5899dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   d50fe54aaa979       storage-provisioner                              kube-system
	ccd3cfa000099       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   32427597c3942       kube-proxy-nvgl4                                 kube-system
	d646de05be7ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   22a6ae08e9855       kindnet-4g5xn                                    kube-system
	f8d3ca1495f06       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   19e1abd2ab941       kube-scheduler-old-k8s-version-280530            kube-system
	eb0a3103a4593       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   afbf2a806db24       kube-controller-manager-old-k8s-version-280530   kube-system
	90f97cb5df33b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   446fd9fe1d6d1       etcd-old-k8s-version-280530                      kube-system
	ecd4ac1e0021e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   ca99bf00166d8       kube-apiserver-old-k8s-version-280530            kube-system
	
	
	==> coredns [0d3379629b1158229b94163b8b3e32fb962ff33a627229d5e1164b39219c66ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39849 - 52029 "HINFO IN 6907857196277987391.854058816645061022. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.03589211s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-280530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-280530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=old-k8s-version-280530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_03_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-280530
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:05:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:04:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-280530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                467d6f4a-aed3-4ac0-a7b7-07929c2703cf
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-6mzkn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-280530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-4g5xn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-280530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-280530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-nvgl4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-280530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-2v4wt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2ggd7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-280530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-280530 event: Registered Node old-k8s-version-280530 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-280530 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node old-k8s-version-280530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-280530 event: Registered Node old-k8s-version-280530 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [90f97cb5df33bb51af20e9b9570f3dd9eee493b40f75a2a5ee449251871d5827] <==
	{"level":"info","ts":"2025-12-10T23:04:57.754883Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:04:57.754923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:04:57.756342Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T23:04:57.756543Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T23:04:57.756563Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T23:04:57.756667Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T23:04:57.756681Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T23:04:59.544119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T23:04:59.544171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T23:04:59.544189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T23:04:59.544204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.544211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.544222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.54423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.545189Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-280530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T23:04:59.545212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T23:04:59.545215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T23:04:59.545398Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T23:04:59.545461Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T23:04:59.54646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T23:04:59.546495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-12-10T23:05:18.574915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.589058ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597640007881118 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt\" mod_revision:564 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt\" value_size:4090 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:18.575044Z","caller":"traceutil/trace.go:171","msg":"trace[1760717058] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"289.256542ms","start":"2025-12-10T23:05:18.285758Z","end":"2025-12-10T23:05:18.575015Z","steps":["trace[1760717058] 'process raft request'  (duration: 114.974026ms)","trace[1760717058] 'compare'  (duration: 173.443561ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:19.53103Z","caller":"traceutil/trace.go:171","msg":"trace[147593829] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"245.287674ms","start":"2025-12-10T23:05:19.285721Z","end":"2025-12-10T23:05:19.531009Z","steps":["trace[147593829] 'process raft request'  (duration: 245.134786ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:19.539791Z","caller":"traceutil/trace.go:171","msg":"trace[822923237] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"249.025828ms","start":"2025-12-10T23:05:19.290743Z","end":"2025-12-10T23:05:19.539769Z","steps":["trace[822923237] 'process raft request'  (duration: 248.714068ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:05:55 up 48 min,  0 user,  load average: 4.31, 2.85, 1.84
	Linux old-k8s-version-280530 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d646de05be7ba9022b593e7a4dd5dbd4d5d2786583fa5210b9cfae363a49463f] <==
	I1210 23:05:01.841832       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:05:01.842133       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 23:05:01.842291       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:05:01.842310       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:05:01.842337       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:05:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:05:02.049323       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:05:02.244151       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:05:02.244194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:05:02.245054       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:05:02.544714       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:05:02.544743       1 metrics.go:72] Registering metrics
	I1210 23:05:02.544848       1 controller.go:711] "Syncing nftables rules"
	I1210 23:05:12.057772       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:12.057820       1 main.go:301] handling current node
	I1210 23:05:22.050761       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:22.050794       1 main.go:301] handling current node
	I1210 23:05:32.049755       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:32.049791       1 main.go:301] handling current node
	I1210 23:05:42.051735       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:42.051767       1 main.go:301] handling current node
	I1210 23:05:52.056343       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:52.056379       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ecd4ac1e0021e9f94b202cd98460d0b3cc215f503cfeb56fd64c76f7de1ab756] <==
	I1210 23:05:00.703946       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 23:05:00.704497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 23:05:00.705428       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1210 23:05:00.707035       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1210 23:05:00.708132       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 23:05:00.706203       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:05:00.709015       1 aggregator.go:166] initial CRD sync complete...
	I1210 23:05:00.709059       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 23:05:00.709083       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:05:00.709105       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:05:00.705967       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1210 23:05:00.717686       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 23:05:00.782449       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:05:00.792083       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1210 23:05:01.609634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:05:01.848994       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 23:05:01.883276       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 23:05:01.906140       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:05:01.914925       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:05:01.922337       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 23:05:01.962327       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.241.49"}
	I1210 23:05:01.975736       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.239.158"}
	I1210 23:05:13.593558       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 23:05:13.602700       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 23:05:13.607898       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [eb0a3103a4593d3942d03941084182840f145923fa99311ab045404007d16faf] <==
	I1210 23:05:13.655900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.263658ms"
	I1210 23:05:13.669628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="14.048466ms"
	I1210 23:05:13.669905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="139.259µs"
	I1210 23:05:13.675220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.262696ms"
	I1210 23:05:13.675487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.257µs"
	I1210 23:05:13.675500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.512µs"
	I1210 23:05:13.681847       1 shared_informer.go:318] Caches are synced for attach detach
	I1210 23:05:13.692259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.019µs"
	I1210 23:05:13.711865       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1210 23:05:13.715478       1 shared_informer.go:318] Caches are synced for crt configmap
	I1210 23:05:13.724694       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 23:05:13.738462       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 23:05:13.760469       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1210 23:05:14.169488       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 23:05:14.169526       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 23:05:14.173102       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 23:05:17.288849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.895µs"
	I1210 23:05:18.576695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.064µs"
	I1210 23:05:19.542266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.558µs"
	I1210 23:05:22.339272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.013663ms"
	I1210 23:05:22.340830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.381431ms"
	I1210 23:05:38.361877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.671µs"
	I1210 23:05:38.744334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.59442ms"
	I1210 23:05:38.744535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.465µs"
	I1210 23:05:43.962797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.457µs"
	
	
	==> kube-proxy [ccd3cfa0000991c0c4b240977487c688c01c7a36e619316c39f65f765528fb4c] <==
	I1210 23:05:01.638575       1 server_others.go:69] "Using iptables proxy"
	I1210 23:05:01.657955       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1210 23:05:01.684261       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:05:01.688118       1 server_others.go:152] "Using iptables Proxier"
	I1210 23:05:01.688181       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 23:05:01.688204       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 23:05:01.688298       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 23:05:01.688614       1 server.go:846] "Version info" version="v1.28.0"
	I1210 23:05:01.688636       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:01.689854       1 config.go:315] "Starting node config controller"
	I1210 23:05:01.689881       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 23:05:01.690289       1 config.go:188] "Starting service config controller"
	I1210 23:05:01.690299       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 23:05:01.690411       1 config.go:97] "Starting endpoint slice config controller"
	I1210 23:05:01.690460       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 23:05:01.790041       1 shared_informer.go:318] Caches are synced for node config
	I1210 23:05:01.791196       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 23:05:01.791251       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [f8d3ca1495f0652ef219712ff154638d44b2ec7e87de3362bff617c05c3c1448] <==
	I1210 23:04:58.162091       1 serving.go:348] Generated self-signed cert in-memory
	W1210 23:05:00.668234       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:05:00.668267       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:05:00.668281       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:05:00.668297       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:05:00.703573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1210 23:05:00.703606       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:00.707947       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:05:00.708045       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 23:05:00.709821       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 23:05:00.709903       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 23:05:00.808810       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.653594     737 topology_manager.go:215] "Topology Admit Handler" podUID="2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-2ggd7"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762609     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hsx\" (UniqueName: \"kubernetes.io/projected/2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e-kube-api-access-t6hsx\") pod \"kubernetes-dashboard-8694d4445c-2ggd7\" (UID: \"2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762798     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vt2j\" (UniqueName: \"kubernetes.io/projected/6f75636b-4d0c-483a-b9cb-c2d761a57b58-kube-api-access-9vt2j\") pod \"dashboard-metrics-scraper-5f989dc9cf-2v4wt\" (UID: \"6f75636b-4d0c-483a-b9cb-c2d761a57b58\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762864     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-2ggd7\" (UID: \"2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762905     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6f75636b-4d0c-483a-b9cb-c2d761a57b58-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-2v4wt\" (UID: \"6f75636b-4d0c-483a-b9cb-c2d761a57b58\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt"
	Dec 10 23:05:17 old-k8s-version-280530 kubelet[737]: I1210 23:05:17.273524     737 scope.go:117] "RemoveContainer" containerID="3a05a81bc7efeb73de833b63decc0ef5e0f85a571dd968a44d159525cd62aa5e"
	Dec 10 23:05:18 old-k8s-version-280530 kubelet[737]: I1210 23:05:18.277959     737 scope.go:117] "RemoveContainer" containerID="3a05a81bc7efeb73de833b63decc0ef5e0f85a571dd968a44d159525cd62aa5e"
	Dec 10 23:05:18 old-k8s-version-280530 kubelet[737]: I1210 23:05:18.278176     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:18 old-k8s-version-280530 kubelet[737]: E1210 23:05:18.278548     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:19 old-k8s-version-280530 kubelet[737]: I1210 23:05:19.282134     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:19 old-k8s-version-280530 kubelet[737]: E1210 23:05:19.282386     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:23 old-k8s-version-280530 kubelet[737]: I1210 23:05:23.952365     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:23 old-k8s-version-280530 kubelet[737]: E1210 23:05:23.952807     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:32 old-k8s-version-280530 kubelet[737]: I1210 23:05:32.324308     737 scope.go:117] "RemoveContainer" containerID="863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91"
	Dec 10 23:05:32 old-k8s-version-280530 kubelet[737]: I1210 23:05:32.339420     737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7" podStartSLOduration=11.605826327 podCreationTimestamp="2025-12-10 23:05:13 +0000 UTC" firstStartedPulling="2025-12-10 23:05:13.997024489 +0000 UTC m=+16.906725164" lastFinishedPulling="2025-12-10 23:05:21.7305201 +0000 UTC m=+24.640220772" observedRunningTime="2025-12-10 23:05:22.317974407 +0000 UTC m=+25.227675088" watchObservedRunningTime="2025-12-10 23:05:32.339321935 +0000 UTC m=+35.249022620"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: I1210 23:05:38.187540     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: I1210 23:05:38.348769     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: I1210 23:05:38.348996     737 scope.go:117] "RemoveContainer" containerID="981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: E1210 23:05:38.349396     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:43 old-k8s-version-280530 kubelet[737]: I1210 23:05:43.952225     737 scope.go:117] "RemoveContainer" containerID="981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	Dec 10 23:05:43 old-k8s-version-280530 kubelet[737]: E1210 23:05:43.952512     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: kubelet.service: Consumed 1.636s CPU time.
	
	
	==> kubernetes-dashboard [016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9] <==
	2025/12/10 23:05:21 Using namespace: kubernetes-dashboard
	2025/12/10 23:05:21 Using in-cluster config to connect to apiserver
	2025/12/10 23:05:21 Using secret token for csrf signing
	2025/12/10 23:05:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:05:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:05:21 Successful initial request to the apiserver, version: v1.28.0
	2025/12/10 23:05:21 Generating JWE encryption key
	2025/12/10 23:05:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:05:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:05:22 Initializing JWE encryption key from synchronized object
	2025/12/10 23:05:22 Creating in-cluster Sidecar client
	2025/12/10 23:05:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:05:22 Serving insecurely on HTTP port: 9090
	2025/12/10 23:05:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:05:21 Starting overwatch
	
	
	==> storage-provisioner [530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8] <==
	I1210 23:05:32.384714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:05:32.393933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:05:32.393976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 23:05:49.791278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:05:49.791440       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280530_54a1e1c3-a9e6-40f3-9a6b-2e4c7099f74d!
	I1210 23:05:49.791425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54431db1-ea80-4659-b536-d1e109546d8c", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-280530_54a1e1c3-a9e6-40f3-9a6b-2e4c7099f74d became leader
	I1210 23:05:49.891688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280530_54a1e1c3-a9e6-40f3-9a6b-2e4c7099f74d!
	
	
	==> storage-provisioner [863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91] <==
	I1210 23:05:01.586446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:05:31.591190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
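Note on the log block above: the final storage-provisioner container exits with "error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout", i.e. the pod could not reach the apiserver through the in-cluster service IP, which is consistent with the control plane being paused or otherwise unreachable at that moment. As a hedged illustration only (not part of the test suite or the provisioner), a minimal Go probe of that same endpoint with the same 32s budget looks roughly like this; the URL and timeout are taken from the log line, everything else is an assumption:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Minimal sketch: probe the in-cluster apiserver service IP the way the
	// provisioner's version check effectively does. Assumes it runs inside the
	// cluster; the real client also authenticates with the service-account
	// CA/token, which is skipped here to keep the sketch self-contained.
	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			// With the apiserver paused this is where the i/o timeout surfaces.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}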
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280530 -n old-k8s-version-280530
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280530 -n old-k8s-version-280530: exit status 2 (344.484458ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
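The status checks in this post-mortem query single fields with Go templates (here {{.APIServer}}, later {{.Host}}); exit status 2 means at least one component is not in the expected state even though the queried field itself prints "Running". A hedged sketch of driving the same query from Go follows; the binary path and profile name are copied from the commands above, while the helper function itself is an assumption, not the suite's real runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Hedged sketch: run the same templated status query the post-mortem uses
	// and report both the field value and the exit code.
	func profileStatus(field, profile string) (string, int, error) {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{."+field+"}}", "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
			err = nil // a non-zero exit is informative here, not a failure to run
		}
		return strings.TrimSpace(string(out)), code, err
	}

	func main() {
		status, code, err := profileStatus("APIServer", "old-k8s-version-280530")
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		// e.g. "Running" with exit code 2 when another component is degraded.
		fmt.Printf("APIServer=%s exit=%d\n", status, code)
	}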
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-280530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-280530
helpers_test.go:244: (dbg) docker inspect old-k8s-version-280530:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e",
	        "Created": "2025-12-10T23:03:39.731784379Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270748,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:04:51.045180808Z",
	            "FinishedAt": "2025-12-10T23:04:50.162497522Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/hostname",
	        "HostsPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/hosts",
	        "LogPath": "/var/lib/docker/containers/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e/733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e-json.log",
	        "Name": "/old-k8s-version-280530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-280530:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-280530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "733a37f892c32150d1b0cab8172bef44fb0743a8a2ec1a2e0628aad10babb34e",
	                "LowerDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45538fee29de103ee68ac759632e07410c0f8ab7f1ed06413b919eb8186f81fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-280530",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-280530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-280530",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-280530",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-280530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2b5c98ebb2ccd11bef39099a68fe01bf15001b8c90c8508a54c1c4a25396700",
	            "SandboxKey": "/var/run/docker/netns/a2b5c98ebb2c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-280530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a08a4bae7c4413ec6f525605767e6d6cb6a704250cf4124a75f3ad968a97154c",
	                    "EndpointID": "1f11f6fcdfc888e05e6d26bd7f6eab10cb4e92530ffd8b05ede786c891192815",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "26:4b:be:a2:cd:c5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-280530",
	                        "733a37f892c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
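For this Pause failure the relevant part of the inspect output is ".State": the container reports "Running": true and "Paused": false, so from Docker's point of view the node was not left paused. A minimal, hedged way to pull just those fields in Go is sketched below; the container name is taken from the report, the decoding helper is an assumption and not the test code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields shown in the dump above are decoded.
	type containerState struct {
		Status  string
		Running bool
		Paused  bool
	}

	// Hedged sketch: extract the container state the Pause post-mortem cares
	// about from `docker container inspect`, which prints a JSON array.
	func inspectState(name string) (containerState, error) {
		out, err := exec.Command("docker", "container", "inspect", name).Output()
		if err != nil {
			return containerState{}, err
		}
		var info []struct {
			State containerState
		}
		if err := json.Unmarshal(out, &info); err != nil || len(info) == 0 {
			return containerState{}, fmt.Errorf("unexpected inspect output: %v", err)
		}
		return info[0].State, nil
	}

	func main() {
		st, err := inspectState("old-k8s-version-280530")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// In this run: Status=running, Running=true, Paused=false.
		fmt.Printf("Status=%s Running=%t Paused=%t\n", st.Status, st.Running, st.Paused)
	}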
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530: exit status 2 (321.691532ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25
E1210 23:05:57.316633    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-280530 logs -n 25: (1.157858198s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p force-systemd-flag-725815                                                                                                                                                                                                                  │ force-systemd-flag-725815    │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ stop    │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p old-k8s-version-280530 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p no-preload-092439 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                  │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                     │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                               │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                               │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:05:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:05:21.315417  279952 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:05:21.315552  279952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:21.315558  279952 out.go:374] Setting ErrFile to fd 2...
	I1210 23:05:21.315563  279952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:21.315908  279952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:05:21.316533  279952 out.go:368] Setting JSON to false
	I1210 23:05:21.318152  279952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2863,"bootTime":1765405058,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:05:21.318230  279952 start.go:143] virtualization: kvm guest
	I1210 23:05:21.321680  279952 out.go:179] * [default-k8s-diff-port-443884] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:05:21.323296  279952 notify.go:221] Checking for updates...
	I1210 23:05:21.323311  279952 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:05:21.325578  279952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:05:21.327595  279952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:21.329578  279952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:05:21.331385  279952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:05:21.333078  279952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:05:21.335474  279952 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:21.335731  279952 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:05:21.336011  279952 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:05:21.336212  279952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:05:21.377288  279952 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:05:21.377534  279952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:21.465505  279952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-10 23:05:21.452703979 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:21.465709  279952 docker.go:319] overlay module found
	I1210 23:05:21.469448  279952 out.go:179] * Using the docker driver based on user configuration
	I1210 23:05:21.471121  279952 start.go:309] selected driver: docker
	I1210 23:05:21.471145  279952 start.go:927] validating driver "docker" against <nil>
	I1210 23:05:21.471160  279952 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:05:21.472520  279952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:21.571004  279952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-10 23:05:21.553945001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:21.571242  279952 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:05:21.571571  279952 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:05:21.578337  279952 out.go:179] * Using Docker driver with root privileges
	I1210 23:05:21.580966  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:21.581055  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:21.581069  279952 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:05:21.581180  279952 start.go:353] cluster config:
	{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:21.582782  279952 out.go:179] * Starting "default-k8s-diff-port-443884" primary control-plane node in "default-k8s-diff-port-443884" cluster
	I1210 23:05:21.585021  279952 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:05:21.587372  279952 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:05:21.589118  279952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:05:21.589144  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:21.589177  279952 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:05:21.589190  279952 cache.go:65] Caching tarball of preloaded images
	I1210 23:05:21.589295  279952 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:05:21.589311  279952 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:05:21.589446  279952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:05:21.589476  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json: {Name:mkf6ccf560ea7c2158ea0ed416f5c6dd51668fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:21.620171  279952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:05:21.620196  279952 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:05:21.620212  279952 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:05:21.620250  279952 start.go:360] acquireMachinesLock for default-k8s-diff-port-443884: {Name:mk4710330ecf7371e663f4e39eab0b9ebe0090d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:05:21.620352  279952 start.go:364] duration metric: took 82.7µs to acquireMachinesLock for "default-k8s-diff-port-443884"
	I1210 23:05:21.620381  279952 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-44
3884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:21.620476  279952 start.go:125] createHost starting for "" (driver="docker")
	W1210 23:05:20.835197  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:23.334201  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:20.213276  278136 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-468067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (5.160420694s)
	I1210 23:05:20.213311  278136 kic.go:203] duration metric: took 5.160581371s to extract preloaded images to volume ...
	W1210 23:05:20.213421  278136 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:05:20.213458  278136 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:05:20.213628  278136 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:05:20.306959  278136 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-468067 --name embed-certs-468067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-468067 --network embed-certs-468067 --ip 192.168.103.2 --volume embed-certs-468067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:05:21.298889  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Running}}
	I1210 23:05:21.328925  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.361796  278136 cli_runner.go:164] Run: docker exec embed-certs-468067 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:05:21.435264  278136 oci.go:144] the created container "embed-certs-468067" has a running status.
	I1210 23:05:21.435296  278136 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa...
	I1210 23:05:21.554156  278136 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:05:21.588772  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.612161  278136 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:05:21.612185  278136 kic_runner.go:114] Args: [docker exec --privileged embed-certs-468067 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:05:21.675540  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.696943  278136 machine.go:94] provisionDockerMachine start ...
	I1210 23:05:21.697041  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:21.727545  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:21.728127  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:21.728218  278136 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:05:21.729164  278136 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59570->127.0.0.1:33079: read: connection reset by peer
	W1210 23:05:22.527416  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:25.026352  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:21.623805  279952 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:05:21.624881  279952 start.go:159] libmachine.API.Create for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:05:21.624987  279952 client.go:173] LocalClient.Create starting
	I1210 23:05:21.625096  279952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:05:21.625190  279952 main.go:143] libmachine: Decoding PEM data...
	I1210 23:05:21.625214  279952 main.go:143] libmachine: Parsing certificate...
	I1210 23:05:21.625283  279952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:05:21.625309  279952 main.go:143] libmachine: Decoding PEM data...
	I1210 23:05:21.625323  279952 main.go:143] libmachine: Parsing certificate...
	I1210 23:05:21.625872  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:05:21.655788  279952 cli_runner.go:211] docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:05:21.655978  279952 network_create.go:284] running [docker network inspect default-k8s-diff-port-443884] to gather additional debugging logs...
	I1210 23:05:21.656086  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884
	W1210 23:05:21.679674  279952 cli_runner.go:211] docker network inspect default-k8s-diff-port-443884 returned with exit code 1
	I1210 23:05:21.679708  279952 network_create.go:287] error running [docker network inspect default-k8s-diff-port-443884]: docker network inspect default-k8s-diff-port-443884: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-443884 not found
	I1210 23:05:21.679724  279952 network_create.go:289] output of [docker network inspect default-k8s-diff-port-443884]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-443884 not found
	
	** /stderr **
	I1210 23:05:21.679849  279952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:21.703214  279952 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:05:21.704277  279952 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:05:21.705309  279952 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:05:21.706496  279952 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001da1570}
	I1210 23:05:21.706530  279952 network_create.go:124] attempt to create docker network default-k8s-diff-port-443884 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 23:05:21.706582  279952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 default-k8s-diff-port-443884
	I1210 23:05:21.819320  279952 network_create.go:108] docker network default-k8s-diff-port-443884 192.168.76.0/24 created
	I1210 23:05:21.819379  279952 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-443884" container
	I1210 23:05:21.819492  279952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:05:21.839558  279952 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-443884 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:05:21.889515  279952 oci.go:103] Successfully created a docker volume default-k8s-diff-port-443884
	I1210 23:05:21.889621  279952 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-443884-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --entrypoint /usr/bin/test -v default-k8s-diff-port-443884:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:05:22.589872  279952 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-443884
	I1210 23:05:22.589953  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:22.589971  279952 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:05:22.590062  279952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-443884:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:05:24.880730  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468067
	
	I1210 23:05:24.880753  278136 ubuntu.go:182] provisioning hostname "embed-certs-468067"
	I1210 23:05:24.880818  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:24.901219  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:24.901446  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:24.901460  278136 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-468067 && echo "embed-certs-468067" | sudo tee /etc/hostname
	I1210 23:05:25.065733  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468067
	
	I1210 23:05:25.065811  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.085124  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:25.085344  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:25.085361  278136 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-468067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-468067' | sudo tee -a /etc/hosts; 
				fi
			fi
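Note: the shell snippet above (and its twin later for default-k8s-diff-port-443884) only makes sure the container's /etc/hosts resolves the new hostname: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. A hedged Go sketch that renders the same shell for an arbitrary hostname; the helper name is illustrative and not minikube's API:

	package main

	import "fmt"

	// hostsFixCmd returns the shell run over SSH above: ensure /etc/hosts maps
	// 127.0.1.1 to the machine's hostname, editing an existing entry if present.
	func hostsFixCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
	}

	func main() {
		fmt.Println(hostsFixCmd("embed-certs-468067"))
	}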
	I1210 23:05:25.220604  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:05:25.220634  278136 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:05:25.220666  278136 ubuntu.go:190] setting up certificates
	I1210 23:05:25.220677  278136 provision.go:84] configureAuth start
	I1210 23:05:25.220737  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:25.241192  278136 provision.go:143] copyHostCerts
	I1210 23:05:25.241268  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:05:25.241284  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:05:25.241383  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:05:25.241538  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:05:25.241555  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:05:25.241600  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:05:25.241727  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:05:25.241740  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:05:25.241788  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:05:25.241886  278136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468067 san=[127.0.0.1 192.168.103.2 embed-certs-468067 localhost minikube]
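Note on provision.go:117 above: it signs a per-machine server certificate with the local minikube CA, embedding the SANs listed in the log (127.0.0.1, 192.168.103.2, the profile name, localhost, minikube). A self-contained Go sketch of issuing such a SAN-bearing server cert with the standard library; illustrative only, not minikube's provision code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// Generates a throwaway CA plus a server cert whose SANs mirror the
	// provision.go log line above. Errors are ignored for brevity.
	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-468067"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:     []string{"embed-certs-468067", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}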
	I1210 23:05:25.496542  278136 provision.go:177] copyRemoteCerts
	I1210 23:05:25.496634  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:05:25.496716  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.514526  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:25.614722  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:05:25.691594  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:05:25.711435  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:05:25.733589  278136 provision.go:87] duration metric: took 512.897643ms to configureAuth
	I1210 23:05:25.733724  278136 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:05:25.733949  278136 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:25.734075  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.754610  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:25.754957  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:25.754983  278136 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:05:26.511482  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:05:26.511510  278136 machine.go:97] duration metric: took 4.814544284s to provisionDockerMachine
	I1210 23:05:26.511524  278136 client.go:176] duration metric: took 12.277945952s to LocalClient.Create
	I1210 23:05:26.511549  278136 start.go:167] duration metric: took 12.278077155s to libmachine.API.Create "embed-certs-468067"
	I1210 23:05:26.511560  278136 start.go:293] postStartSetup for "embed-certs-468067" (driver="docker")
	I1210 23:05:26.511572  278136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:05:26.511763  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:05:26.511852  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:26.532552  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:26.704820  278136 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:05:26.709721  278136 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:05:26.709754  278136 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:05:26.709769  278136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:05:26.709845  278136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:05:26.709948  278136 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:05:26.710085  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:05:26.721562  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:26.848263  278136 start.go:296] duration metric: took 336.688388ms for postStartSetup
	I1210 23:05:26.848691  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:26.873274  278136 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/config.json ...
	I1210 23:05:26.873610  278136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:05:26.873692  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:26.900475  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.006888  278136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:05:27.012829  278136 start.go:128] duration metric: took 12.782191279s to createHost
	I1210 23:05:27.012864  278136 start.go:83] releasing machines lock for "embed-certs-468067", held for 12.782341389s
	I1210 23:05:27.012933  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:27.036898  278136 ssh_runner.go:195] Run: cat /version.json
	I1210 23:05:27.036959  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:27.036970  278136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:05:27.037076  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:27.060167  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.060474  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.162188  278136 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:27.226209  278136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:05:27.275765  278136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:05:27.281847  278136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:05:27.281930  278136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:05:27.318410  278136 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:05:27.318440  278136 start.go:496] detecting cgroup driver to use...
	I1210 23:05:27.318475  278136 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:05:27.318526  278136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:05:27.343038  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:05:27.364315  278136 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:05:27.364384  278136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:05:27.389787  278136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:05:27.413856  278136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:05:27.541797  278136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:05:27.670940  278136 docker.go:234] disabling docker service ...
	I1210 23:05:27.671031  278136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:05:27.697315  278136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:05:27.716184  278136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:05:27.850931  278136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:05:27.981061  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:05:27.996218  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:05:28.014155  278136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:05:28.014219  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.051730  278136 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:05:28.051784  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.065018  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.103431  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.116352  278136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:05:28.126426  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.145779  278136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.179941  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
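Note: taken together, the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins the registry.k8s.io/pause:3.10.1 pause image, uses the systemd cgroup manager with a pod-scoped conmon cgroup, and opens unprivileged low ports through default_sysctls. A hedged Go sketch applying the equivalent rewrites to an in-memory sample of that drop-in (the sample content is invented for illustration):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// A tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
	const before = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "system.slice"
	`

	// Applies the same rewrites as the sed commands in the log: pin the pause
	// image, switch to the systemd cgroup manager, move conmon into the pod
	// cgroup, and open unprivileged low ports via default_sysctls.
	func main() {
		conf := before
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = strings.Replace(conf, `cgroup_manager = "systemd"`,
			"cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"", 1)
		if !strings.Contains(conf, "default_sysctls") {
			conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		}
		fmt.Print(conf)
	}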
	I1210 23:05:28.228512  278136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:05:28.238742  278136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:05:28.248400  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:28.341055  278136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:05:28.494660  278136 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:05:28.494733  278136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:05:28.499231  278136 start.go:564] Will wait 60s for crictl version
	I1210 23:05:28.499291  278136 ssh_runner.go:195] Run: which crictl
	I1210 23:05:28.503669  278136 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:05:28.532177  278136 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:05:28.532269  278136 ssh_runner.go:195] Run: crio --version
	I1210 23:05:28.561587  278136 ssh_runner.go:195] Run: crio --version
	I1210 23:05:28.592747  278136 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 23:05:25.371310  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:27.842945  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:28.594020  278136 cli_runner.go:164] Run: docker network inspect embed-certs-468067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:28.612293  278136 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 23:05:28.616598  278136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:28.627201  278136 kubeadm.go:884] updating cluster {Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:05:28.627316  278136 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:28.627367  278136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:28.661883  278136 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:28.661902  278136 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:05:28.661944  278136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:28.687014  278136 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:28.687034  278136 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:05:28.687041  278136 kubeadm.go:935] updating node { 192.168.103.2  8443 v1.34.2 crio true true} ...
	I1210 23:05:28.687129  278136 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-468067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:05:28.687190  278136 ssh_runner.go:195] Run: crio config
	I1210 23:05:28.733943  278136 cni.go:84] Creating CNI manager for ""
	I1210 23:05:28.733974  278136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:28.733996  278136 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:05:28.734025  278136 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-468067 NodeName:embed-certs-468067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:05:28.734178  278136 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-468067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:05:28.734252  278136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:05:28.742810  278136 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:05:28.742874  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:05:28.751108  278136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1210 23:05:28.763770  278136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:05:28.779326  278136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1210 23:05:28.792419  278136 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:05:28.796143  278136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:28.806368  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:28.886347  278136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:28.915355  278136 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067 for IP: 192.168.103.2
	I1210 23:05:28.915375  278136 certs.go:195] generating shared ca certs ...
	I1210 23:05:28.915391  278136 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:28.915538  278136 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:05:28.915578  278136 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:05:28.915589  278136 certs.go:257] generating profile certs ...
	I1210 23:05:28.915662  278136 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key
	I1210 23:05:28.915683  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt with IP's: []
	I1210 23:05:29.071762  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt ...
	I1210 23:05:29.071790  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt: {Name:mke0e555380504e9132d2137e7e3455acb66a23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.071961  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key ...
	I1210 23:05:29.071972  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key: {Name:mkade729adab8303334fe37f8122b250a832c9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.072045  278136 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675
	I1210 23:05:29.072062  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1210 23:05:29.182555  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 ...
	I1210 23:05:29.182578  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675: {Name:mk79dcee6a7b68243255d08226f8c8ea8df6f017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.182744  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675 ...
	I1210 23:05:29.182757  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675: {Name:mk10df82a762ea271844528df46692c222a8362f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.182829  278136 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt
	I1210 23:05:29.182918  278136 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key
	I1210 23:05:29.182985  278136 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key
	I1210 23:05:29.183000  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt with IP's: []
	I1210 23:05:29.307119  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt ...
	I1210 23:05:29.307141  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt: {Name:mk79ff9e69db8cc3194e716f102e712e2d4d77b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.307307  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key ...
	I1210 23:05:29.307320  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key: {Name:mk9ba245274e937db4839af0f85390a9d76968ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.307534  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:05:29.307573  278136 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:05:29.307584  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:05:29.307609  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:05:29.307633  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:05:29.307667  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:05:29.307708  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:29.308231  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:05:29.327101  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:05:29.346183  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:05:29.364478  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:05:29.382184  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 23:05:29.399389  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:05:29.416638  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:05:29.433809  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:05:29.452092  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:05:29.472758  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:05:29.490967  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:05:29.509406  278136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:05:29.522774  278136 ssh_runner.go:195] Run: openssl version
	I1210 23:05:29.529665  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.537656  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:05:29.545565  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.549586  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.549666  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.584765  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:05:29.592832  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:05:29.600987  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.608754  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:05:29.616631  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.620437  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.620484  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.655679  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:05:29.664002  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:05:29.672120  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.681216  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:05:29.689857  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.693709  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.693766  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.731507  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:29.739594  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
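Note: the openssl/ln pairs above follow the standard OpenSSL rehash convention: each CA certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how TLS clients on the node locate it. A small Go sketch of one such rehash step, shelling out to the same openssl invocation; illustrative only, since minikube drives these commands over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// rehash links certPath into /etc/ssl/certs under its OpenSSL subject-hash
	// name (e.g. b5213941.0), mirroring the openssl x509 -hash / ln -fs pair
	// that the log runs for minikubeCA.pem, 8660.pem and 86602.pem.
	func rehash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}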
	I1210 23:05:29.747821  278136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:05:29.751615  278136 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:05:29.751683  278136 kubeadm.go:401] StartCluster: {Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:29.751761  278136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:05:29.751831  278136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:05:29.777853  278136 cri.go:89] found id: ""
	I1210 23:05:29.777925  278136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:05:29.786216  278136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:05:29.794212  278136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:05:29.794263  278136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:05:29.801953  278136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:05:29.801970  278136 kubeadm.go:158] found existing configuration files:
	
	I1210 23:05:29.802006  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:05:29.809495  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:05:29.809549  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:05:29.817210  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:05:29.825100  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:05:29.825166  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:05:29.833323  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:05:29.841242  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:05:29.841302  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:05:29.848731  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:05:29.856766  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:05:29.856814  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:05:29.865300  278136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:05:29.902403  278136 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:05:29.902454  278136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:05:29.923349  278136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:05:29.923458  278136 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:05:29.923512  278136 kubeadm.go:319] OS: Linux
	I1210 23:05:29.923562  278136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:05:29.923628  278136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:05:29.923714  278136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:05:29.923819  278136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:05:29.923903  278136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:05:29.923977  278136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:05:29.924051  278136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:05:29.924101  278136 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:05:29.981605  278136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:05:29.981771  278136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:05:29.981894  278136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:05:29.988919  278136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 23:05:27.027050  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:29.526193  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:26.862824  279952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-443884:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.272703863s)
	I1210 23:05:26.862856  279952 kic.go:203] duration metric: took 4.272881051s to extract preloaded images to volume ...
	W1210 23:05:26.862949  279952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:05:26.862995  279952 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:05:26.863041  279952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:05:26.938446  279952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-443884 --name default-k8s-diff-port-443884 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --network default-k8s-diff-port-443884 --ip 192.168.76.2 --volume default-k8s-diff-port-443884:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:05:27.537953  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Running}}
	I1210 23:05:27.562632  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.593817  279952 cli_runner.go:164] Run: docker exec default-k8s-diff-port-443884 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:05:27.651271  279952 oci.go:144] the created container "default-k8s-diff-port-443884" has a running status.
	I1210 23:05:27.651311  279952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa...
	I1210 23:05:27.769585  279952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:05:27.800953  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.828718  279952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:05:27.828741  279952 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-443884 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:05:27.889900  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.915356  279952 machine.go:94] provisionDockerMachine start ...
	I1210 23:05:27.915454  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:27.951712  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:27.952036  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:27.952052  279952 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:05:27.952985  279952 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 23:05:31.088959  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:05:31.088990  279952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-443884"
	I1210 23:05:31.089070  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.107804  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.108208  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.108239  279952 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-443884 && echo "default-k8s-diff-port-443884" | sudo tee /etc/hostname
	I1210 23:05:31.254706  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:05:31.254790  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.273656  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.273937  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.273961  279952 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-443884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-443884/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-443884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:05:31.409456  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:05:31.409482  279952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:05:31.409529  279952 ubuntu.go:190] setting up certificates
	I1210 23:05:31.409548  279952 provision.go:84] configureAuth start
	I1210 23:05:31.409602  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:31.427336  279952 provision.go:143] copyHostCerts
	I1210 23:05:31.427407  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:05:31.427418  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:05:31.427493  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:05:31.427589  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:05:31.427598  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:05:31.427631  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:05:31.427733  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:05:31.427742  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:05:31.427768  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:05:31.427832  279952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-443884 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-443884 localhost minikube]
	I1210 23:05:31.667347  279952 provision.go:177] copyRemoteCerts
	I1210 23:05:31.667406  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:05:31.667438  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.686302  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:31.784186  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:05:31.803562  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 23:05:31.821057  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:05:31.839727  279952 provision.go:87] duration metric: took 430.167459ms to configureAuth
	I1210 23:05:31.839748  279952 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:05:31.839920  279952 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:31.840025  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.859548  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.859901  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.859927  279952 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:05:32.153794  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:05:32.153821  279952 machine.go:97] duration metric: took 4.238436809s to provisionDockerMachine
	I1210 23:05:32.153835  279952 client.go:176] duration metric: took 10.528837696s to LocalClient.Create
	I1210 23:05:32.153863  279952 start.go:167] duration metric: took 10.528985188s to libmachine.API.Create "default-k8s-diff-port-443884"
	I1210 23:05:32.153875  279952 start.go:293] postStartSetup for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:05:32.153889  279952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:05:32.153949  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:05:32.153985  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.171730  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.270740  279952 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:05:32.274281  279952 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:05:32.274307  279952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:05:32.274319  279952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:05:32.274371  279952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:05:32.274450  279952 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:05:32.274542  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:05:32.282079  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:32.302413  279952 start.go:296] duration metric: took 148.520167ms for postStartSetup
	I1210 23:05:32.302872  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:32.320682  279952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:05:32.321004  279952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:05:32.321053  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.346274  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.443063  279952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:05:32.448104  279952 start.go:128] duration metric: took 10.827612732s to createHost
	I1210 23:05:32.448128  279952 start.go:83] releasing machines lock for "default-k8s-diff-port-443884", held for 10.827764504s
	I1210 23:05:32.448198  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:32.466547  279952 ssh_runner.go:195] Run: cat /version.json
	I1210 23:05:32.466597  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.466663  279952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:05:32.466745  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.486179  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.486510  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.637008  279952 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:32.643974  279952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:05:32.682605  279952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:05:32.688290  279952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:05:32.688368  279952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:05:32.718783  279952 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:05:32.718805  279952 start.go:496] detecting cgroup driver to use...
	I1210 23:05:32.718839  279952 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:05:32.718887  279952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:05:32.736209  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:05:32.749128  279952 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:05:32.749186  279952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:05:32.766975  279952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:05:32.785140  279952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:05:32.874331  279952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:05:32.963222  279952 docker.go:234] disabling docker service ...
	I1210 23:05:32.963291  279952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:05:32.982534  279952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:05:32.997142  279952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:05:33.081960  279952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:05:33.181936  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:05:33.195465  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:05:33.210008  279952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:05:33.210065  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.220700  279952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:05:33.220765  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.229956  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.239377  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.249068  279952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:05:33.257305  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.266019  279952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.279712  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.288539  279952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:05:33.296476  279952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:05:33.303858  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:33.389580  279952 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:05:33.538797  279952 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:05:33.538869  279952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:05:33.543296  279952 start.go:564] Will wait 60s for crictl version
	I1210 23:05:33.543365  279952 ssh_runner.go:195] Run: which crictl
	I1210 23:05:33.547325  279952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:05:33.571444  279952 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:05:33.571514  279952 ssh_runner.go:195] Run: crio --version
	I1210 23:05:33.598912  279952 ssh_runner.go:195] Run: crio --version
	I1210 23:05:33.630913  279952 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 23:05:30.334341  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:32.334430  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:29.991802  278136 out.go:252]   - Generating certificates and keys ...
	I1210 23:05:29.991901  278136 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:05:29.991990  278136 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:05:30.351608  278136 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:05:30.593176  278136 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:05:30.755320  278136 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:05:30.977407  278136 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:05:31.085043  278136 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:05:31.085216  278136 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-468067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 23:05:31.884952  278136 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:05:31.885114  278136 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-468067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 23:05:32.128820  278136 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:05:32.281129  278136 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:05:33.153677  278136 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:05:33.153771  278136 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:05:33.283014  278136 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:05:33.675630  278136 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:05:33.759625  278136 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:05:33.814126  278136 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:05:34.008745  278136 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:05:34.009454  278136 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:05:34.013938  278136 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:05:33.632188  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:33.650548  279952 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:05:33.654778  279952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:33.665335  279952 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:05:33.665471  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:33.665522  279952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:33.699300  279952 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:33.699325  279952 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:05:33.699383  279952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:33.725754  279952 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:33.725775  279952 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:05:33.725784  279952 kubeadm.go:935] updating node { 192.168.76.2  8444 v1.34.2 crio true true} ...
	I1210 23:05:33.725879  279952 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-443884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:05:33.725958  279952 ssh_runner.go:195] Run: crio config
	I1210 23:05:33.773897  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:33.773919  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:33.773933  279952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:05:33.773952  279952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-443884 NodeName:default-k8s-diff-port-443884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:05:33.774070  279952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-443884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:05:33.774129  279952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:05:33.782558  279952 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:05:33.782623  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:05:33.790780  279952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 23:05:33.803922  279952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:05:33.819325  279952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 23:05:33.833524  279952 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:05:33.837539  279952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:33.847973  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:33.932121  279952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:33.960425  279952 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884 for IP: 192.168.76.2
	I1210 23:05:33.960443  279952 certs.go:195] generating shared ca certs ...
	I1210 23:05:33.960462  279952 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:33.960630  279952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:05:33.960704  279952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:05:33.960718  279952 certs.go:257] generating profile certs ...
	I1210 23:05:33.960792  279952 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key
	I1210 23:05:33.960817  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt with IP's: []
	I1210 23:05:34.057077  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt ...
	I1210 23:05:34.057105  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt: {Name:mk51847952dee09af95f401b00c827a06f5160a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.057270  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key ...
	I1210 23:05:34.057282  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key: {Name:mkf375f3b6a63380e9965a3cb09d66e6ff1b51cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.057361  279952 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94
	I1210 23:05:34.057384  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 23:05:34.136636  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 ...
	I1210 23:05:34.136676  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94: {Name:mk002a91b8c9f2fb4b46891974129537a6ecfc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.136847  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94 ...
	I1210 23:05:34.136862  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94: {Name:mkd3d0eff1194b75939303cc097dff6606b0b6c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.136933  279952 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt
	I1210 23:05:34.137006  279952 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key
	I1210 23:05:34.137066  279952 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key
	I1210 23:05:34.137081  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt with IP's: []
	I1210 23:05:34.220084  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt ...
	I1210 23:05:34.220108  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt: {Name:mka111ca179d41320378687d39fe32a1ab401271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.220284  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key ...
	I1210 23:05:34.220298  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key: {Name:mkfd978f51ccbb0329e7bc88cc26a4c2dc6d8abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.220523  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:05:34.220562  279952 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:05:34.220573  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:05:34.220597  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:05:34.220621  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:05:34.220659  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:05:34.220724  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:34.221261  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:05:34.240495  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:05:34.260518  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:05:34.278207  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:05:34.295549  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 23:05:34.313819  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:05:34.332779  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:05:34.351978  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:05:34.369453  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:05:34.389088  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:05:34.406689  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:05:34.423900  279952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:05:34.436918  279952 ssh_runner.go:195] Run: openssl version
	I1210 23:05:34.443077  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.451518  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:05:34.459429  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.463331  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.463387  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.498849  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:05:34.506923  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:05:34.514672  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.522328  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:05:34.530594  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.534511  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.534565  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.569396  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:34.577310  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:34.585012  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.592934  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:05:34.600629  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.604461  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.604515  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.639297  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:05:34.647330  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:05:34.655251  279952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:05:34.659028  279952 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:05:34.659086  279952 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:34.659172  279952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:05:34.659239  279952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:05:34.690714  279952 cri.go:89] found id: ""
	I1210 23:05:34.690785  279952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:05:34.699614  279952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:05:34.709093  279952 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:05:34.709144  279952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:05:34.717328  279952 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:05:34.717359  279952 kubeadm.go:158] found existing configuration files:
	
	I1210 23:05:34.717405  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 23:05:34.725308  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:05:34.725366  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:05:34.733106  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 23:05:34.741129  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:05:34.741182  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:05:34.749178  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 23:05:34.757226  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:05:34.757275  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:05:34.764816  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 23:05:34.772969  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:05:34.773022  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:05:34.781188  279952 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:05:34.830362  279952 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:05:34.830437  279952 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:05:34.853117  279952 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:05:34.853190  279952 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:05:34.853230  279952 kubeadm.go:319] OS: Linux
	I1210 23:05:34.853297  279952 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:05:34.853373  279952 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:05:34.853416  279952 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:05:34.853458  279952 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:05:34.853513  279952 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:05:34.853553  279952 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:05:34.853661  279952 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:05:34.853730  279952 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:05:34.917131  279952 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:05:34.917280  279952 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:05:34.917435  279952 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:05:34.924504  279952 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 23:05:31.528280  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:34.026219  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:34.926960  279952 out.go:252]   - Generating certificates and keys ...
	I1210 23:05:34.927084  279952 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:05:34.927196  279952 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:05:35.403022  279952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:05:35.705371  279952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:05:36.157799  279952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:05:34.016209  278136 out.go:252]   - Booting up control plane ...
	I1210 23:05:34.016326  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:05:34.016435  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:05:34.017554  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:05:34.032908  278136 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:05:34.033076  278136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:05:34.040913  278136 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:05:34.041222  278136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:05:34.041310  278136 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:05:34.147564  278136 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:05:34.147726  278136 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:05:35.148682  278136 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001186459s
	I1210 23:05:35.151592  278136 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:05:35.151727  278136 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1210 23:05:35.151852  278136 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:05:35.151961  278136 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:05:37.115948  278136 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.964278263s
	I1210 23:05:37.326345  278136 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.174576485s
	I1210 23:05:38.653088  278136 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501379838s
	I1210 23:05:38.672660  278136 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:05:38.682162  278136 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:05:38.691627  278136 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:05:38.691817  278136 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-468067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:05:38.699476  278136 kubeadm.go:319] [bootstrap-token] Using token: vc7tt6.1ma2zdzjremls6oi
	I1210 23:05:36.394195  279952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:05:36.699432  279952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:05:36.699668  279952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-443884 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 23:05:36.853566  279952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:05:36.853729  279952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-443884 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 23:05:37.237894  279952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:05:37.887346  279952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:05:38.035256  279952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:05:38.035414  279952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:05:38.131597  279952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:05:38.206508  279952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:05:38.262108  279952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:05:38.568290  279952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:05:38.740049  279952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:05:38.740793  279952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:05:38.744608  279952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1210 23:05:34.335263  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:36.833884  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:38.834469  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:38.701150  278136 out.go:252]   - Configuring RBAC rules ...
	I1210 23:05:38.701295  278136 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:05:38.704803  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:05:38.709973  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:05:38.712391  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:05:38.714770  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:05:38.717330  278136 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:05:39.059930  278136 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:05:39.476535  278136 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:05:40.059845  278136 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:05:40.060904  278136 kubeadm.go:319] 
	I1210 23:05:40.061003  278136 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:05:40.061040  278136 kubeadm.go:319] 
	I1210 23:05:40.061181  278136 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:05:40.061199  278136 kubeadm.go:319] 
	I1210 23:05:40.061232  278136 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:05:40.061318  278136 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:05:40.061392  278136 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:05:40.061401  278136 kubeadm.go:319] 
	I1210 23:05:40.061493  278136 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:05:40.061510  278136 kubeadm.go:319] 
	I1210 23:05:40.061577  278136 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:05:40.061588  278136 kubeadm.go:319] 
	I1210 23:05:40.061670  278136 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:05:40.061826  278136 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:05:40.061923  278136 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:05:40.061933  278136 kubeadm.go:319] 
	I1210 23:05:40.062072  278136 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:05:40.062192  278136 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:05:40.062214  278136 kubeadm.go:319] 
	I1210 23:05:40.062308  278136 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vc7tt6.1ma2zdzjremls6oi \
	I1210 23:05:40.062443  278136 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:05:40.062470  278136 kubeadm.go:319] 	--control-plane 
	I1210 23:05:40.062478  278136 kubeadm.go:319] 
	I1210 23:05:40.062582  278136 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:05:40.062591  278136 kubeadm.go:319] 
	I1210 23:05:40.062719  278136 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vc7tt6.1ma2zdzjremls6oi \
	I1210 23:05:40.062828  278136 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:05:40.065627  278136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:05:40.065833  278136 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:05:40.065868  278136 cni.go:84] Creating CNI manager for ""
	I1210 23:05:40.065881  278136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:40.067426  278136 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1210 23:05:36.028634  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:38.526674  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:39.026394  270470 pod_ready.go:94] pod "coredns-5dd5756b68-6mzkn" is "Ready"
	I1210 23:05:39.026418  270470 pod_ready.go:86] duration metric: took 37.006112476s for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.029141  270470 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.032878  270470 pod_ready.go:94] pod "etcd-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.032895  270470 pod_ready.go:86] duration metric: took 3.736841ms for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.035267  270470 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.039084  270470 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.039100  270470 pod_ready.go:86] duration metric: took 3.817017ms for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.041365  270470 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.224222  270470 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.224250  270470 pod_ready.go:86] duration metric: took 182.867637ms for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.425713  270470 pod_ready.go:83] waiting for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.824129  270470 pod_ready.go:94] pod "kube-proxy-nvgl4" is "Ready"
	I1210 23:05:39.824155  270470 pod_ready.go:86] duration metric: took 398.41578ms for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.025046  270470 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.424982  270470 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-280530" is "Ready"
	I1210 23:05:40.425010  270470 pod_ready.go:86] duration metric: took 399.940018ms for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.425028  270470 pod_ready.go:40] duration metric: took 38.409041474s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:40.471271  270470 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 23:05:40.472796  270470 out.go:203] 
	W1210 23:05:40.474173  270470 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 23:05:40.475227  270470 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 23:05:40.476535  270470 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-280530" cluster and "default" namespace by default
	I1210 23:05:38.745963  279952 out.go:252]   - Booting up control plane ...
	I1210 23:05:38.746105  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:05:38.746206  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:05:38.747825  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:05:38.762756  279952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:05:38.762924  279952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:05:38.769442  279952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:05:38.769622  279952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:05:38.769715  279952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:05:38.869128  279952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:05:38.869246  279952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:05:40.369850  279952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500867942s
	I1210 23:05:40.374332  279952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:05:40.374482  279952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1210 23:05:40.374711  279952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:05:40.374834  279952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 23:05:40.835431  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:43.334516  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:40.068553  278136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:05:40.073284  278136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:05:40.073306  278136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:05:40.091013  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:05:40.303352  278136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:05:40.303417  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:40.303441  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-468067 minikube.k8s.io/updated_at=2025_12_10T23_05_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=embed-certs-468067 minikube.k8s.io/primary=true
	I1210 23:05:40.313293  278136 ops.go:34] apiserver oom_adj: -16
	I1210 23:05:40.378089  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:40.878855  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:41.378845  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:41.878906  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.378433  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.878834  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:43.378962  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:43.879108  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.393467  279952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.018399943s
	I1210 23:05:42.394503  279952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.020217138s
	I1210 23:05:44.376449  279952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002089254s
	I1210 23:05:44.394198  279952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:05:44.405702  279952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:05:44.416487  279952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:05:44.416805  279952 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-443884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:05:44.426438  279952 kubeadm.go:319] [bootstrap-token] Using token: bdnp9h.to2dgl31xr9dkwz5
	I1210 23:05:44.379177  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:44.878480  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:45.378914  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:45.449851  278136 kubeadm.go:1114] duration metric: took 5.14650104s to wait for elevateKubeSystemPrivileges
	I1210 23:05:45.449886  278136 kubeadm.go:403] duration metric: took 15.698207011s to StartCluster
	I1210 23:05:45.450011  278136 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:45.450102  278136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:45.452199  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:45.452484  278136 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:45.452632  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:05:45.453102  278136 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:45.453099  278136 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:05:45.453199  278136 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-468067"
	I1210 23:05:45.453231  278136 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-468067"
	I1210 23:05:45.453261  278136 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:05:45.453287  278136 addons.go:70] Setting default-storageclass=true in profile "embed-certs-468067"
	I1210 23:05:45.453309  278136 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-468067"
	I1210 23:05:45.453723  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.454265  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.454717  278136 out.go:179] * Verifying Kubernetes components...
	I1210 23:05:45.457422  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:45.486553  278136 addons.go:239] Setting addon default-storageclass=true in "embed-certs-468067"
	I1210 23:05:45.486718  278136 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:05:45.487325  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.490135  278136 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:05:44.428776  279952 out.go:252]   - Configuring RBAC rules ...
	I1210 23:05:44.428945  279952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:05:44.431774  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:05:44.437409  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:05:44.441061  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:05:44.443828  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:05:44.447026  279952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:05:44.782438  279952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:05:45.200076  279952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:05:45.782497  279952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:05:45.783786  279952 kubeadm.go:319] 
	I1210 23:05:45.783890  279952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:05:45.783902  279952 kubeadm.go:319] 
	I1210 23:05:45.783990  279952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:05:45.783998  279952 kubeadm.go:319] 
	I1210 23:05:45.784039  279952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:05:45.784112  279952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:05:45.784188  279952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:05:45.784204  279952 kubeadm.go:319] 
	I1210 23:05:45.784312  279952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:05:45.784331  279952 kubeadm.go:319] 
	I1210 23:05:45.784396  279952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:05:45.784406  279952 kubeadm.go:319] 
	I1210 23:05:45.784469  279952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:05:45.784575  279952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:05:45.784730  279952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:05:45.784744  279952 kubeadm.go:319] 
	I1210 23:05:45.784874  279952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:05:45.784977  279952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:05:45.784989  279952 kubeadm.go:319] 
	I1210 23:05:45.785081  279952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token bdnp9h.to2dgl31xr9dkwz5 \
	I1210 23:05:45.785190  279952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:05:45.785217  279952 kubeadm.go:319] 	--control-plane 
	I1210 23:05:45.785226  279952 kubeadm.go:319] 
	I1210 23:05:45.785345  279952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:05:45.785356  279952 kubeadm.go:319] 
	I1210 23:05:45.785453  279952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token bdnp9h.to2dgl31xr9dkwz5 \
	I1210 23:05:45.785567  279952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:05:45.788874  279952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:05:45.789027  279952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:05:45.789056  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:45.789085  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:45.790618  279952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:05:45.492042  278136 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:45.492059  278136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:05:45.492115  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:45.519499  278136 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:45.519528  278136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:05:45.519625  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:45.523139  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:45.543799  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:45.561861  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:05:45.619261  278136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:45.642303  278136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:45.661850  278136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:45.731298  278136 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1210 23:05:45.732304  278136 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:05:45.961001  278136 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:05:45.791839  279952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:05:45.796562  279952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:05:45.796582  279952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:05:45.811119  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:05:46.030619  279952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:05:46.030699  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:46.030765  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-443884 minikube.k8s.io/updated_at=2025_12_10T23_05_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=default-k8s-diff-port-443884 minikube.k8s.io/primary=true
	I1210 23:05:46.041384  279952 ops.go:34] apiserver oom_adj: -16
	I1210 23:05:46.113000  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1210 23:05:45.334950  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:45.834730  273929 pod_ready.go:94] pod "coredns-7d764666f9-5tpb8" is "Ready"
	I1210 23:05:45.834762  273929 pod_ready.go:86] duration metric: took 31.506416988s for pod "coredns-7d764666f9-5tpb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.837911  273929 pod_ready.go:83] waiting for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.842136  273929 pod_ready.go:94] pod "etcd-no-preload-092439" is "Ready"
	I1210 23:05:45.842157  273929 pod_ready.go:86] duration metric: took 4.230953ms for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.845582  273929 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.849432  273929 pod_ready.go:94] pod "kube-apiserver-no-preload-092439" is "Ready"
	I1210 23:05:45.849453  273929 pod_ready.go:86] duration metric: took 3.846386ms for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.851434  273929 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.033192  273929 pod_ready.go:94] pod "kube-controller-manager-no-preload-092439" is "Ready"
	I1210 23:05:46.033224  273929 pod_ready.go:86] duration metric: took 181.767834ms for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.232384  273929 pod_ready.go:83] waiting for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.632419  273929 pod_ready.go:94] pod "kube-proxy-gqz42" is "Ready"
	I1210 23:05:46.632450  273929 pod_ready.go:86] duration metric: took 400.040431ms for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.832502  273929 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:47.232861  273929 pod_ready.go:94] pod "kube-scheduler-no-preload-092439" is "Ready"
	I1210 23:05:47.232892  273929 pod_ready.go:86] duration metric: took 400.366591ms for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:47.232908  273929 pod_ready.go:40] duration metric: took 32.909358343s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:47.280508  273929 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:05:47.281991  273929 out.go:179] * Done! kubectl is now configured to use "no-preload-092439" cluster and "default" namespace by default
	I1210 23:05:45.962276  278136 addons.go:530] duration metric: took 509.17689ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:05:46.235747  278136 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-468067" context rescaled to 1 replicas
	W1210 23:05:47.735830  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	I1210 23:05:46.613910  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:47.113080  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:47.613875  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:48.113232  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:48.613174  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:49.113224  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:49.613917  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.113829  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.613873  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.685511  279952 kubeadm.go:1114] duration metric: took 4.654880357s to wait for elevateKubeSystemPrivileges
	I1210 23:05:50.685559  279952 kubeadm.go:403] duration metric: took 16.026470518s to StartCluster
	I1210 23:05:50.685582  279952 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:50.685709  279952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:50.687466  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:50.687720  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:05:50.687732  279952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:50.687802  279952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:05:50.687909  279952 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-443884"
	I1210 23:05:50.687931  279952 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-443884"
	I1210 23:05:50.687946  279952 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-443884"
	I1210 23:05:50.687960  279952 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:05:50.687976  279952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-443884"
	I1210 23:05:50.687954  279952 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:50.688332  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.688484  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.689939  279952 out.go:179] * Verifying Kubernetes components...
	I1210 23:05:50.691358  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:50.711362  279952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:05:50.712612  279952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:50.712632  279952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:05:50.712715  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:50.712803  279952 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-443884"
	I1210 23:05:50.712847  279952 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:05:50.713267  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.743749  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:50.746417  279952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:50.746441  279952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:05:50.746494  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:50.770270  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:50.774861  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:05:50.833407  279952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:50.857046  279952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:50.880889  279952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:50.958517  279952 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1210 23:05:50.959713  279952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:05:51.167487  279952 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:05:51.168805  279952 addons.go:530] duration metric: took 481.002504ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1210 23:05:49.737595  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	W1210 23:05:52.236343  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	I1210 23:05:51.462493  279952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-443884" context rescaled to 1 replicas
	W1210 23:05:52.964002  279952 node_ready.go:57] node "default-k8s-diff-port-443884" has "Ready":"False" status (will retry)
	W1210 23:05:55.463440  279952 node_ready.go:57] node "default-k8s-diff-port-443884" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 10 23:05:21 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:21.802075897Z" level=info msg="Created container 016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7/kubernetes-dashboard" id=db95ebd0-49c3-4774-a0dd-8ca9f41f7fd4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:21 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:21.803362198Z" level=info msg="Starting container: 016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9" id=7ffadae5-6627-4b28-be7b-9e1aa75f514f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:21 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:21.805417764Z" level=info msg="Started container" PID=1725 containerID=016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7/kubernetes-dashboard id=7ffadae5-6627-4b28-be7b-9e1aa75f514f name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed9386104cb56786a48f9a19ae02de367be282587a580a2f4f9120f5bcfac5a5
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.324813858Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=dcfe7cbb-599c-41bf-9eaa-153d463a41ed name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.32562719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a33914df-5b42-4ffd-b4a6-b5aacd3228a9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.326702288Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4a9ce739-8322-4f8f-b0cf-8f716cbd16b9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.326807563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333025105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333229173Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dc07989e8e74a26d3767eb56ee76505a5f956d244c102b74c0b827fd1e340034/merged/etc/passwd: no such file or directory"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333279117Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dc07989e8e74a26d3767eb56ee76505a5f956d244c102b74c0b827fd1e340034/merged/etc/group: no such file or directory"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.333600251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.369471742Z" level=info msg="Created container 530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8: kube-system/storage-provisioner/storage-provisioner" id=4a9ce739-8322-4f8f-b0cf-8f716cbd16b9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.370141389Z" level=info msg="Starting container: 530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8" id=f79475ea-116a-45c2-a9e9-6f73f943edfb name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:32 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:32.371957936Z" level=info msg="Started container" PID=1747 containerID=530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8 description=kube-system/storage-provisioner/storage-provisioner id=f79475ea-116a-45c2-a9e9-6f73f943edfb name=/runtime.v1.RuntimeService/StartContainer sandboxID=d50fe54aaa979788f9b8ecb9f93d35f222562665ddcfc70437c4d651a6da2cb9
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.188354431Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dcffc856-7585-4151-8de8-78b28b83eeb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.189435949Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0c2de86c-cdf4-430f-9aba-2adfd359db7a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.19057661Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper" id=f2759c30-10ef-4c03-ba1e-a077a24bc6ad name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.19075642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.196713218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.197244818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.237444349Z" level=info msg="Created container 981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper" id=f2759c30-10ef-4c03-ba1e-a077a24bc6ad name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.238264033Z" level=info msg="Starting container: 981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce" id=81649f3f-910f-47c1-8124-5d32e4415c0f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.240696622Z" level=info msg="Started container" PID=1763 containerID=981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper id=81649f3f-910f-47c1-8124-5d32e4415c0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=d389e558ee85cfd75a6fe5311f6b6de3d2dd675badccc871458f6091728f0a33
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.350354893Z" level=info msg="Removing container: 6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71" id=738572a6-39c8-425b-9cbf-6347dd88582a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:05:38 old-k8s-version-280530 crio[568]: time="2025-12-10T23:05:38.36077501Z" level=info msg="Removed container 6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt/dashboard-metrics-scraper" id=738572a6-39c8-425b-9cbf-6347dd88582a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	981a583e3f2e8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   d389e558ee85c       dashboard-metrics-scraper-5f989dc9cf-2v4wt       kubernetes-dashboard
	530f93f9f6d46       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   d50fe54aaa979       storage-provisioner                              kube-system
	016cec5b1f976       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   ed9386104cb56       kubernetes-dashboard-8694d4445c-2ggd7            kubernetes-dashboard
	465f874eaa2a9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   73aaefa4027d2       busybox                                          default
	0d3379629b115       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   1a3d1f5324a3f       coredns-5dd5756b68-6mzkn                         kube-system
	863419c5899dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   d50fe54aaa979       storage-provisioner                              kube-system
	ccd3cfa000099       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   32427597c3942       kube-proxy-nvgl4                                 kube-system
	d646de05be7ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   22a6ae08e9855       kindnet-4g5xn                                    kube-system
	f8d3ca1495f06       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   19e1abd2ab941       kube-scheduler-old-k8s-version-280530            kube-system
	eb0a3103a4593       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   afbf2a806db24       kube-controller-manager-old-k8s-version-280530   kube-system
	90f97cb5df33b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   446fd9fe1d6d1       etcd-old-k8s-version-280530                      kube-system
	ecd4ac1e0021e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   ca99bf00166d8       kube-apiserver-old-k8s-version-280530            kube-system
	
	
	==> coredns [0d3379629b1158229b94163b8b3e32fb962ff33a627229d5e1164b39219c66ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39849 - 52029 "HINFO IN 6907857196277987391.854058816645061022. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.03589211s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-280530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-280530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=old-k8s-version-280530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_03_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-280530
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:05:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:05:31 +0000   Wed, 10 Dec 2025 23:04:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-280530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                467d6f4a-aed3-4ac0-a7b7-07929c2703cf
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-6mzkn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-280530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-4g5xn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-280530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-280530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-nvgl4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-280530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-2v4wt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2ggd7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-280530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-280530 event: Registered Node old-k8s-version-280530 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-280530 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-280530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-280530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-280530 event: Registered Node old-k8s-version-280530 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [90f97cb5df33bb51af20e9b9570f3dd9eee493b40f75a2a5ee449251871d5827] <==
	{"level":"info","ts":"2025-12-10T23:04:57.754883Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:04:57.754923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T23:04:57.756342Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T23:04:57.756543Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T23:04:57.756563Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T23:04:57.756667Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T23:04:57.756681Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T23:04:59.544119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T23:04:59.544171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T23:04:59.544189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T23:04:59.544204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.544211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.544222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.54423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T23:04:59.545189Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-280530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T23:04:59.545212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T23:04:59.545215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T23:04:59.545398Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T23:04:59.545461Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T23:04:59.54646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T23:04:59.546495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-12-10T23:05:18.574915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.589058ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597640007881118 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt\" mod_revision:564 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt\" value_size:4090 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:18.575044Z","caller":"traceutil/trace.go:171","msg":"trace[1760717058] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"289.256542ms","start":"2025-12-10T23:05:18.285758Z","end":"2025-12-10T23:05:18.575015Z","steps":["trace[1760717058] 'process raft request'  (duration: 114.974026ms)","trace[1760717058] 'compare'  (duration: 173.443561ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:19.53103Z","caller":"traceutil/trace.go:171","msg":"trace[147593829] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"245.287674ms","start":"2025-12-10T23:05:19.285721Z","end":"2025-12-10T23:05:19.531009Z","steps":["trace[147593829] 'process raft request'  (duration: 245.134786ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:19.539791Z","caller":"traceutil/trace.go:171","msg":"trace[822923237] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"249.025828ms","start":"2025-12-10T23:05:19.290743Z","end":"2025-12-10T23:05:19.539769Z","steps":["trace[822923237] 'process raft request'  (duration: 248.714068ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:05:57 up 48 min,  0 user,  load average: 4.31, 2.85, 1.84
	Linux old-k8s-version-280530 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d646de05be7ba9022b593e7a4dd5dbd4d5d2786583fa5210b9cfae363a49463f] <==
	I1210 23:05:01.841832       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:05:01.842133       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 23:05:01.842291       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:05:01.842310       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:05:01.842337       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:05:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:05:02.049323       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:05:02.244151       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:05:02.244194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:05:02.245054       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:05:02.544714       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:05:02.544743       1 metrics.go:72] Registering metrics
	I1210 23:05:02.544848       1 controller.go:711] "Syncing nftables rules"
	I1210 23:05:12.057772       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:12.057820       1 main.go:301] handling current node
	I1210 23:05:22.050761       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:22.050794       1 main.go:301] handling current node
	I1210 23:05:32.049755       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:32.049791       1 main.go:301] handling current node
	I1210 23:05:42.051735       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:42.051767       1 main.go:301] handling current node
	I1210 23:05:52.056343       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 23:05:52.056379       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ecd4ac1e0021e9f94b202cd98460d0b3cc215f503cfeb56fd64c76f7de1ab756] <==
	I1210 23:05:00.703946       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 23:05:00.704497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 23:05:00.705428       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1210 23:05:00.707035       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1210 23:05:00.708132       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 23:05:00.706203       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:05:00.709015       1 aggregator.go:166] initial CRD sync complete...
	I1210 23:05:00.709059       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 23:05:00.709083       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:05:00.709105       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:05:00.705967       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1210 23:05:00.717686       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 23:05:00.782449       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:05:00.792083       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1210 23:05:01.609634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:05:01.848994       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 23:05:01.883276       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 23:05:01.906140       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:05:01.914925       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:05:01.922337       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 23:05:01.962327       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.241.49"}
	I1210 23:05:01.975736       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.239.158"}
	I1210 23:05:13.593558       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 23:05:13.602700       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 23:05:13.607898       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [eb0a3103a4593d3942d03941084182840f145923fa99311ab045404007d16faf] <==
	I1210 23:05:13.655900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.263658ms"
	I1210 23:05:13.669628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="14.048466ms"
	I1210 23:05:13.669905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="139.259µs"
	I1210 23:05:13.675220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.262696ms"
	I1210 23:05:13.675487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.257µs"
	I1210 23:05:13.675500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.512µs"
	I1210 23:05:13.681847       1 shared_informer.go:318] Caches are synced for attach detach
	I1210 23:05:13.692259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.019µs"
	I1210 23:05:13.711865       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1210 23:05:13.715478       1 shared_informer.go:318] Caches are synced for crt configmap
	I1210 23:05:13.724694       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 23:05:13.738462       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 23:05:13.760469       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1210 23:05:14.169488       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 23:05:14.169526       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 23:05:14.173102       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 23:05:17.288849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.895µs"
	I1210 23:05:18.576695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.064µs"
	I1210 23:05:19.542266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.558µs"
	I1210 23:05:22.339272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.013663ms"
	I1210 23:05:22.340830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.381431ms"
	I1210 23:05:38.361877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.671µs"
	I1210 23:05:38.744334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.59442ms"
	I1210 23:05:38.744535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.465µs"
	I1210 23:05:43.962797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.457µs"
	
	
	==> kube-proxy [ccd3cfa0000991c0c4b240977487c688c01c7a36e619316c39f65f765528fb4c] <==
	I1210 23:05:01.638575       1 server_others.go:69] "Using iptables proxy"
	I1210 23:05:01.657955       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1210 23:05:01.684261       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:05:01.688118       1 server_others.go:152] "Using iptables Proxier"
	I1210 23:05:01.688181       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 23:05:01.688204       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 23:05:01.688298       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 23:05:01.688614       1 server.go:846] "Version info" version="v1.28.0"
	I1210 23:05:01.688636       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:01.689854       1 config.go:315] "Starting node config controller"
	I1210 23:05:01.689881       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 23:05:01.690289       1 config.go:188] "Starting service config controller"
	I1210 23:05:01.690299       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 23:05:01.690411       1 config.go:97] "Starting endpoint slice config controller"
	I1210 23:05:01.690460       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 23:05:01.790041       1 shared_informer.go:318] Caches are synced for node config
	I1210 23:05:01.791196       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 23:05:01.791251       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [f8d3ca1495f0652ef219712ff154638d44b2ec7e87de3362bff617c05c3c1448] <==
	I1210 23:04:58.162091       1 serving.go:348] Generated self-signed cert in-memory
	W1210 23:05:00.668234       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:05:00.668267       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:05:00.668281       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:05:00.668297       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:05:00.703573       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1210 23:05:00.703606       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:00.707947       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:05:00.708045       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 23:05:00.709821       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 23:05:00.709903       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 23:05:00.808810       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.653594     737 topology_manager.go:215] "Topology Admit Handler" podUID="2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-2ggd7"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762609     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hsx\" (UniqueName: \"kubernetes.io/projected/2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e-kube-api-access-t6hsx\") pod \"kubernetes-dashboard-8694d4445c-2ggd7\" (UID: \"2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762798     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vt2j\" (UniqueName: \"kubernetes.io/projected/6f75636b-4d0c-483a-b9cb-c2d761a57b58-kube-api-access-9vt2j\") pod \"dashboard-metrics-scraper-5f989dc9cf-2v4wt\" (UID: \"6f75636b-4d0c-483a-b9cb-c2d761a57b58\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762864     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-2ggd7\" (UID: \"2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7"
	Dec 10 23:05:13 old-k8s-version-280530 kubelet[737]: I1210 23:05:13.762905     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6f75636b-4d0c-483a-b9cb-c2d761a57b58-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-2v4wt\" (UID: \"6f75636b-4d0c-483a-b9cb-c2d761a57b58\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt"
	Dec 10 23:05:17 old-k8s-version-280530 kubelet[737]: I1210 23:05:17.273524     737 scope.go:117] "RemoveContainer" containerID="3a05a81bc7efeb73de833b63decc0ef5e0f85a571dd968a44d159525cd62aa5e"
	Dec 10 23:05:18 old-k8s-version-280530 kubelet[737]: I1210 23:05:18.277959     737 scope.go:117] "RemoveContainer" containerID="3a05a81bc7efeb73de833b63decc0ef5e0f85a571dd968a44d159525cd62aa5e"
	Dec 10 23:05:18 old-k8s-version-280530 kubelet[737]: I1210 23:05:18.278176     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:18 old-k8s-version-280530 kubelet[737]: E1210 23:05:18.278548     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:19 old-k8s-version-280530 kubelet[737]: I1210 23:05:19.282134     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:19 old-k8s-version-280530 kubelet[737]: E1210 23:05:19.282386     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:23 old-k8s-version-280530 kubelet[737]: I1210 23:05:23.952365     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:23 old-k8s-version-280530 kubelet[737]: E1210 23:05:23.952807     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:32 old-k8s-version-280530 kubelet[737]: I1210 23:05:32.324308     737 scope.go:117] "RemoveContainer" containerID="863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91"
	Dec 10 23:05:32 old-k8s-version-280530 kubelet[737]: I1210 23:05:32.339420     737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2ggd7" podStartSLOduration=11.605826327 podCreationTimestamp="2025-12-10 23:05:13 +0000 UTC" firstStartedPulling="2025-12-10 23:05:13.997024489 +0000 UTC m=+16.906725164" lastFinishedPulling="2025-12-10 23:05:21.7305201 +0000 UTC m=+24.640220772" observedRunningTime="2025-12-10 23:05:22.317974407 +0000 UTC m=+25.227675088" watchObservedRunningTime="2025-12-10 23:05:32.339321935 +0000 UTC m=+35.249022620"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: I1210 23:05:38.187540     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: I1210 23:05:38.348769     737 scope.go:117] "RemoveContainer" containerID="6da26e96329edbcb77f9eea38ab8b0769c50f654aa18c7a203025c18150f0d71"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: I1210 23:05:38.348996     737 scope.go:117] "RemoveContainer" containerID="981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	Dec 10 23:05:38 old-k8s-version-280530 kubelet[737]: E1210 23:05:38.349396     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:43 old-k8s-version-280530 kubelet[737]: I1210 23:05:43.952225     737 scope.go:117] "RemoveContainer" containerID="981a583e3f2e8a29affc572868dd69901d1aa3a3f2802342b57c1f4d16810bce"
	Dec 10 23:05:43 old-k8s-version-280530 kubelet[737]: E1210 23:05:43.952512     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2v4wt_kubernetes-dashboard(6f75636b-4d0c-483a-b9cb-c2d761a57b58)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2v4wt" podUID="6f75636b-4d0c-483a-b9cb-c2d761a57b58"
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:05:52 old-k8s-version-280530 systemd[1]: kubelet.service: Consumed 1.636s CPU time.
	
	
	==> kubernetes-dashboard [016cec5b1f976effd1e6bdc9e7ccec0ae87762520d677174c9844f0a096c6bd9] <==
	2025/12/10 23:05:21 Using namespace: kubernetes-dashboard
	2025/12/10 23:05:21 Using in-cluster config to connect to apiserver
	2025/12/10 23:05:21 Using secret token for csrf signing
	2025/12/10 23:05:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:05:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:05:21 Successful initial request to the apiserver, version: v1.28.0
	2025/12/10 23:05:21 Generating JWE encryption key
	2025/12/10 23:05:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:05:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:05:22 Initializing JWE encryption key from synchronized object
	2025/12/10 23:05:22 Creating in-cluster Sidecar client
	2025/12/10 23:05:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:05:22 Serving insecurely on HTTP port: 9090
	2025/12/10 23:05:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:05:21 Starting overwatch
	
	
	==> storage-provisioner [530f93f9f6d46bd9c777b6c7a464d171b8f46c0b8ffdf9d16ff43becdae842a8] <==
	I1210 23:05:32.384714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:05:32.393933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:05:32.393976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 23:05:49.791278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:05:49.791440       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280530_54a1e1c3-a9e6-40f3-9a6b-2e4c7099f74d!
	I1210 23:05:49.791425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54431db1-ea80-4659-b536-d1e109546d8c", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-280530_54a1e1c3-a9e6-40f3-9a6b-2e4c7099f74d became leader
	I1210 23:05:49.891688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-280530_54a1e1c3-a9e6-40f3-9a6b-2e4c7099f74d!
	
	
	==> storage-provisioner [863419c5899dcd48454e155e680a84c4c173f4b24f24bdc678a6fd7f4bc44f91] <==
	I1210 23:05:01.586446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:05:31.591190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280530 -n old-k8s-version-280530
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-280530 -n old-k8s-version-280530: exit status 2 (323.599153ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-280530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-092439 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-092439 --alsologtostderr -v=1: exit status 80 (1.560172577s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-092439 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:05:59.031425  287452 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:05:59.031518  287452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:59.031527  287452 out.go:374] Setting ErrFile to fd 2...
	I1210 23:05:59.031531  287452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:59.031805  287452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:05:59.032059  287452 out.go:368] Setting JSON to false
	I1210 23:05:59.032082  287452 mustload.go:66] Loading cluster: no-preload-092439
	I1210 23:05:59.032433  287452 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:05:59.032871  287452 cli_runner.go:164] Run: docker container inspect no-preload-092439 --format={{.State.Status}}
	I1210 23:05:59.051911  287452 host.go:66] Checking if "no-preload-092439" exists ...
	I1210 23:05:59.052164  287452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:59.108588  287452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 23:05:59.097949365 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:59.109280  287452 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-netw
ork:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:no-preload-092439 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(bool=false) vm
-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 23:05:59.111428  287452 out.go:179] * Pausing node no-preload-092439 ... 
	I1210 23:05:59.112572  287452 host.go:66] Checking if "no-preload-092439" exists ...
	I1210 23:05:59.112845  287452 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:59.112883  287452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-092439
	I1210 23:05:59.131188  287452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/no-preload-092439/id_rsa Username:docker}
	I1210 23:05:59.227704  287452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:59.240271  287452 pause.go:52] kubelet running: true
	I1210 23:05:59.240338  287452 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:05:59.407509  287452 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:05:59.407614  287452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:05:59.482498  287452 cri.go:89] found id: "db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00"
	I1210 23:05:59.482545  287452 cri.go:89] found id: "3bf4f7155c432603b41a6c12c2954315b96cd1a34c84a2e13f9a7a39e46ef3cd"
	I1210 23:05:59.482554  287452 cri.go:89] found id: "c3ef0ffa9ede88313d5564b45adfa71559c2b439d058f7d23b302fa80b482168"
	I1210 23:05:59.482560  287452 cri.go:89] found id: "8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714"
	I1210 23:05:59.482565  287452 cri.go:89] found id: "8173be3b7c05b175f4824b0b205d6e0ac2d5ea31cc37448e3cf92b819a82793d"
	I1210 23:05:59.482570  287452 cri.go:89] found id: "6ee26bc7c96ed586eff3850cfe0f16397254e657370bfce96dc19153353ccd40"
	I1210 23:05:59.482576  287452 cri.go:89] found id: "47d48a88aaf2f336aaf052c8e06ba295472eb4a8dc9582731814742da2d715a2"
	I1210 23:05:59.482598  287452 cri.go:89] found id: "017238cc878d0c921bda71833bcc5f0f7afe24f51f551351e5cf67faa077db1e"
	I1210 23:05:59.482604  287452 cri.go:89] found id: "9e0d3af710c80ebeda3c5932ad2b93927ce199ee5ed52ebdded84495b7ed024b"
	I1210 23:05:59.482616  287452 cri.go:89] found id: "4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	I1210 23:05:59.482622  287452 cri.go:89] found id: "efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5"
	I1210 23:05:59.482630  287452 cri.go:89] found id: ""
	I1210 23:05:59.482689  287452 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:05:59.495456  287452 retry.go:31] will retry after 164.459308ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:59Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:05:59.660903  287452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:59.673655  287452 pause.go:52] kubelet running: false
	I1210 23:05:59.673718  287452 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:05:59.813632  287452 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:05:59.813738  287452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:05:59.881427  287452 cri.go:89] found id: "db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00"
	I1210 23:05:59.881457  287452 cri.go:89] found id: "3bf4f7155c432603b41a6c12c2954315b96cd1a34c84a2e13f9a7a39e46ef3cd"
	I1210 23:05:59.881461  287452 cri.go:89] found id: "c3ef0ffa9ede88313d5564b45adfa71559c2b439d058f7d23b302fa80b482168"
	I1210 23:05:59.881464  287452 cri.go:89] found id: "8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714"
	I1210 23:05:59.881467  287452 cri.go:89] found id: "8173be3b7c05b175f4824b0b205d6e0ac2d5ea31cc37448e3cf92b819a82793d"
	I1210 23:05:59.881470  287452 cri.go:89] found id: "6ee26bc7c96ed586eff3850cfe0f16397254e657370bfce96dc19153353ccd40"
	I1210 23:05:59.881473  287452 cri.go:89] found id: "47d48a88aaf2f336aaf052c8e06ba295472eb4a8dc9582731814742da2d715a2"
	I1210 23:05:59.881476  287452 cri.go:89] found id: "017238cc878d0c921bda71833bcc5f0f7afe24f51f551351e5cf67faa077db1e"
	I1210 23:05:59.881478  287452 cri.go:89] found id: "9e0d3af710c80ebeda3c5932ad2b93927ce199ee5ed52ebdded84495b7ed024b"
	I1210 23:05:59.881492  287452 cri.go:89] found id: "4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	I1210 23:05:59.881497  287452 cri.go:89] found id: "efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5"
	I1210 23:05:59.881502  287452 cri.go:89] found id: ""
	I1210 23:05:59.881549  287452 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:05:59.894670  287452 retry.go:31] will retry after 346.241877ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:05:59Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:06:00.241224  287452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:00.254257  287452 pause.go:52] kubelet running: false
	I1210 23:06:00.254312  287452 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:06:00.414956  287452 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:06:00.415032  287452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:06:00.489476  287452 cri.go:89] found id: "db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00"
	I1210 23:06:00.489520  287452 cri.go:89] found id: "3bf4f7155c432603b41a6c12c2954315b96cd1a34c84a2e13f9a7a39e46ef3cd"
	I1210 23:06:00.489528  287452 cri.go:89] found id: "c3ef0ffa9ede88313d5564b45adfa71559c2b439d058f7d23b302fa80b482168"
	I1210 23:06:00.489534  287452 cri.go:89] found id: "8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714"
	I1210 23:06:00.489538  287452 cri.go:89] found id: "8173be3b7c05b175f4824b0b205d6e0ac2d5ea31cc37448e3cf92b819a82793d"
	I1210 23:06:00.489543  287452 cri.go:89] found id: "6ee26bc7c96ed586eff3850cfe0f16397254e657370bfce96dc19153353ccd40"
	I1210 23:06:00.489551  287452 cri.go:89] found id: "47d48a88aaf2f336aaf052c8e06ba295472eb4a8dc9582731814742da2d715a2"
	I1210 23:06:00.489556  287452 cri.go:89] found id: "017238cc878d0c921bda71833bcc5f0f7afe24f51f551351e5cf67faa077db1e"
	I1210 23:06:00.489560  287452 cri.go:89] found id: "9e0d3af710c80ebeda3c5932ad2b93927ce199ee5ed52ebdded84495b7ed024b"
	I1210 23:06:00.489576  287452 cri.go:89] found id: "4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	I1210 23:06:00.489584  287452 cri.go:89] found id: "efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5"
	I1210 23:06:00.489588  287452 cri.go:89] found id: ""
	I1210 23:06:00.489634  287452 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:06:00.507423  287452 out.go:203] 
	W1210 23:06:00.509351  287452 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:06:00.509370  287452 out.go:285] * 
	* 
	W1210 23:06:00.513325  287452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:06:00.515504  287452 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-092439 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-092439
helpers_test.go:244: (dbg) docker inspect no-preload-092439:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213",
	        "Created": "2025-12-10T23:03:49.807359238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:05:04.10216554Z",
	            "FinishedAt": "2025-12-10T23:05:03.161555362Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/hostname",
	        "HostsPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/hosts",
	        "LogPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213-json.log",
	        "Name": "/no-preload-092439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-092439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-092439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213",
	                "LowerDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-092439",
	                "Source": "/var/lib/docker/volumes/no-preload-092439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-092439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-092439",
	                "name.minikube.sigs.k8s.io": "no-preload-092439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b50e25863e078713840c184013bdc1b5c9b6fc28f353f6b29581045492112b5f",
	            "SandboxKey": "/var/run/docker/netns/b50e25863e07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-092439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9adf045f08f3157cc4b3a22d4d1229edfd6c1e8d22978b4ef7f6f7a0d83df92c",
	                    "EndpointID": "63a2d812f7cd710c1e1dbda450fea4335735f6489607e8111d6fb3806be2545f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:95:0b:1e:01:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-092439",
	                        "08ed46fd1dff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439: exit status 2 (329.718398ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
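Note: the exit status 2 above is recorded but tolerated because this is a Pause test, so not every component is expected to be Running even though the host field still reports "Running". A hedged Go sketch of how such a probe can be driven outside the harness, assuming out/minikube-linux-amd64 is present in the working directory and the no-preload-092439 profile exists (hypothetical example, not the helpers_test.go implementation):

	// statusprobe.go - runs `minikube status --format={{.Host}}` and keeps both
	// the stdout value and the non-zero exit code instead of failing immediately.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "no-preload-092439", "-n", "no-preload-092439")
		out, err := cmd.Output() // stdout is still returned on a non-zero exit

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // 2 in the run above: "may be ok" for a just-paused cluster
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)
	}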
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-092439 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-092439 logs -n 25: (1.350824797s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p NoKubernetes-508535 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                        │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p old-k8s-version-280530 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p no-preload-092439 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                  │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                     │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                               │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                               │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                    │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:05:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:05:21.315417  279952 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:05:21.315552  279952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:21.315558  279952 out.go:374] Setting ErrFile to fd 2...
	I1210 23:05:21.315563  279952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:05:21.315908  279952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:05:21.316533  279952 out.go:368] Setting JSON to false
	I1210 23:05:21.318152  279952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2863,"bootTime":1765405058,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:05:21.318230  279952 start.go:143] virtualization: kvm guest
	I1210 23:05:21.321680  279952 out.go:179] * [default-k8s-diff-port-443884] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:05:21.323296  279952 notify.go:221] Checking for updates...
	I1210 23:05:21.323311  279952 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:05:21.325578  279952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:05:21.327595  279952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:21.329578  279952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:05:21.331385  279952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:05:21.333078  279952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:05:21.335474  279952 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:21.335731  279952 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:05:21.336011  279952 config.go:182] Loaded profile config "old-k8s-version-280530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 23:05:21.336212  279952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:05:21.377288  279952 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:05:21.377534  279952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:21.465505  279952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-10 23:05:21.452703979 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:21.465709  279952 docker.go:319] overlay module found
	I1210 23:05:21.469448  279952 out.go:179] * Using the docker driver based on user configuration
	I1210 23:05:21.471121  279952 start.go:309] selected driver: docker
	I1210 23:05:21.471145  279952 start.go:927] validating driver "docker" against <nil>
	I1210 23:05:21.471160  279952 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:05:21.472520  279952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:05:21.571004  279952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-10 23:05:21.553945001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:05:21.571242  279952 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:05:21.571571  279952 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:05:21.578337  279952 out.go:179] * Using Docker driver with root privileges
	I1210 23:05:21.580966  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:21.581055  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:21.581069  279952 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:05:21.581180  279952 start.go:353] cluster config:
	{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:21.582782  279952 out.go:179] * Starting "default-k8s-diff-port-443884" primary control-plane node in "default-k8s-diff-port-443884" cluster
	I1210 23:05:21.585021  279952 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:05:21.587372  279952 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:05:21.589118  279952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:05:21.589144  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:21.589177  279952 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:05:21.589190  279952 cache.go:65] Caching tarball of preloaded images
	I1210 23:05:21.589295  279952 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:05:21.589311  279952 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:05:21.589446  279952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:05:21.589476  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json: {Name:mkf6ccf560ea7c2158ea0ed416f5c6dd51668fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:21.620171  279952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:05:21.620196  279952 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:05:21.620212  279952 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:05:21.620250  279952 start.go:360] acquireMachinesLock for default-k8s-diff-port-443884: {Name:mk4710330ecf7371e663f4e39eab0b9ebe0090d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:05:21.620352  279952 start.go:364] duration metric: took 82.7µs to acquireMachinesLock for "default-k8s-diff-port-443884"
	I1210 23:05:21.620381  279952 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-44
3884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:21.620476  279952 start.go:125] createHost starting for "" (driver="docker")
	W1210 23:05:20.835197  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:23.334201  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:20.213276  278136 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-468067:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (5.160420694s)
	I1210 23:05:20.213311  278136 kic.go:203] duration metric: took 5.160581371s to extract preloaded images to volume ...
	W1210 23:05:20.213421  278136 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:05:20.213458  278136 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:05:20.213628  278136 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:05:20.306959  278136 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-468067 --name embed-certs-468067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-468067 --network embed-certs-468067 --ip 192.168.103.2 --volume embed-certs-468067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:05:21.298889  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Running}}
	I1210 23:05:21.328925  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.361796  278136 cli_runner.go:164] Run: docker exec embed-certs-468067 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:05:21.435264  278136 oci.go:144] the created container "embed-certs-468067" has a running status.
	I1210 23:05:21.435296  278136 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa...
	I1210 23:05:21.554156  278136 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:05:21.588772  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.612161  278136 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:05:21.612185  278136 kic_runner.go:114] Args: [docker exec --privileged embed-certs-468067 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:05:21.675540  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:21.696943  278136 machine.go:94] provisionDockerMachine start ...
	I1210 23:05:21.697041  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:21.727545  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:21.728127  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:21.728218  278136 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:05:21.729164  278136 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59570->127.0.0.1:33079: read: connection reset by peer
	W1210 23:05:22.527416  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:25.026352  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:21.623805  279952 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:05:21.624881  279952 start.go:159] libmachine.API.Create for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:05:21.624987  279952 client.go:173] LocalClient.Create starting
	I1210 23:05:21.625096  279952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:05:21.625190  279952 main.go:143] libmachine: Decoding PEM data...
	I1210 23:05:21.625214  279952 main.go:143] libmachine: Parsing certificate...
	I1210 23:05:21.625283  279952 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:05:21.625309  279952 main.go:143] libmachine: Decoding PEM data...
	I1210 23:05:21.625323  279952 main.go:143] libmachine: Parsing certificate...
	I1210 23:05:21.625872  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:05:21.655788  279952 cli_runner.go:211] docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:05:21.655978  279952 network_create.go:284] running [docker network inspect default-k8s-diff-port-443884] to gather additional debugging logs...
	I1210 23:05:21.656086  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884
	W1210 23:05:21.679674  279952 cli_runner.go:211] docker network inspect default-k8s-diff-port-443884 returned with exit code 1
	I1210 23:05:21.679708  279952 network_create.go:287] error running [docker network inspect default-k8s-diff-port-443884]: docker network inspect default-k8s-diff-port-443884: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-443884 not found
	I1210 23:05:21.679724  279952 network_create.go:289] output of [docker network inspect default-k8s-diff-port-443884]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-443884 not found
	
	** /stderr **
	I1210 23:05:21.679849  279952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:21.703214  279952 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:05:21.704277  279952 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:05:21.705309  279952 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:05:21.706496  279952 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001da1570}
	I1210 23:05:21.706530  279952 network_create.go:124] attempt to create docker network default-k8s-diff-port-443884 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 23:05:21.706582  279952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 default-k8s-diff-port-443884
	I1210 23:05:21.819320  279952 network_create.go:108] docker network default-k8s-diff-port-443884 192.168.76.0/24 created
	I1210 23:05:21.819379  279952 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-443884" container
	I1210 23:05:21.819492  279952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:05:21.839558  279952 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-443884 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:05:21.889515  279952 oci.go:103] Successfully created a docker volume default-k8s-diff-port-443884
	I1210 23:05:21.889621  279952 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-443884-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --entrypoint /usr/bin/test -v default-k8s-diff-port-443884:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:05:22.589872  279952 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-443884
	I1210 23:05:22.589953  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:22.589971  279952 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:05:22.590062  279952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-443884:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:05:24.880730  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468067
	
	I1210 23:05:24.880753  278136 ubuntu.go:182] provisioning hostname "embed-certs-468067"
	I1210 23:05:24.880818  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:24.901219  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:24.901446  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:24.901460  278136 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-468067 && echo "embed-certs-468067" | sudo tee /etc/hostname
	I1210 23:05:25.065733  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468067
	
	I1210 23:05:25.065811  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.085124  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:25.085344  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:25.085361  278136 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-468067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-468067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:05:25.220604  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:05:25.220634  278136 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:05:25.220666  278136 ubuntu.go:190] setting up certificates
	I1210 23:05:25.220677  278136 provision.go:84] configureAuth start
	I1210 23:05:25.220737  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:25.241192  278136 provision.go:143] copyHostCerts
	I1210 23:05:25.241268  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:05:25.241284  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:05:25.241383  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:05:25.241538  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:05:25.241555  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:05:25.241600  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:05:25.241727  278136 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:05:25.241740  278136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:05:25.241788  278136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:05:25.241886  278136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468067 san=[127.0.0.1 192.168.103.2 embed-certs-468067 localhost minikube]
	I1210 23:05:25.496542  278136 provision.go:177] copyRemoteCerts
	I1210 23:05:25.496634  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:05:25.496716  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.514526  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:25.614722  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:05:25.691594  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:05:25.711435  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:05:25.733589  278136 provision.go:87] duration metric: took 512.897643ms to configureAuth
	I1210 23:05:25.733724  278136 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:05:25.733949  278136 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:25.734075  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:25.754610  278136 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:25.754957  278136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33079 <nil> <nil>}
	I1210 23:05:25.754983  278136 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:05:26.511482  278136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:05:26.511510  278136 machine.go:97] duration metric: took 4.814544284s to provisionDockerMachine
	I1210 23:05:26.511524  278136 client.go:176] duration metric: took 12.277945952s to LocalClient.Create
	I1210 23:05:26.511549  278136 start.go:167] duration metric: took 12.278077155s to libmachine.API.Create "embed-certs-468067"
	I1210 23:05:26.511560  278136 start.go:293] postStartSetup for "embed-certs-468067" (driver="docker")
	I1210 23:05:26.511572  278136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:05:26.511763  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:05:26.511852  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:26.532552  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:26.704820  278136 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:05:26.709721  278136 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:05:26.709754  278136 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:05:26.709769  278136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:05:26.709845  278136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:05:26.709948  278136 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:05:26.710085  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:05:26.721562  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:26.848263  278136 start.go:296] duration metric: took 336.688388ms for postStartSetup
	I1210 23:05:26.848691  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:26.873274  278136 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/config.json ...
	I1210 23:05:26.873610  278136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:05:26.873692  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:26.900475  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.006888  278136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:05:27.012829  278136 start.go:128] duration metric: took 12.782191279s to createHost
	I1210 23:05:27.012864  278136 start.go:83] releasing machines lock for "embed-certs-468067", held for 12.782341389s
	I1210 23:05:27.012933  278136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468067
	I1210 23:05:27.036898  278136 ssh_runner.go:195] Run: cat /version.json
	I1210 23:05:27.036959  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:27.036970  278136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:05:27.037076  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:27.060167  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.060474  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:27.162188  278136 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:27.226209  278136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:05:27.275765  278136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:05:27.281847  278136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:05:27.281930  278136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:05:27.318410  278136 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:05:27.318440  278136 start.go:496] detecting cgroup driver to use...
	I1210 23:05:27.318475  278136 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:05:27.318526  278136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:05:27.343038  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:05:27.364315  278136 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:05:27.364384  278136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:05:27.389787  278136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:05:27.413856  278136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:05:27.541797  278136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:05:27.670940  278136 docker.go:234] disabling docker service ...
	I1210 23:05:27.671031  278136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:05:27.697315  278136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:05:27.716184  278136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:05:27.850931  278136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:05:27.981061  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:05:27.996218  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:05:28.014155  278136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:05:28.014219  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.051730  278136 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:05:28.051784  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.065018  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.103431  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.116352  278136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:05:28.126426  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.145779  278136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.179941  278136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:28.228512  278136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:05:28.238742  278136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:05:28.248400  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:28.341055  278136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:05:28.494660  278136 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:05:28.494733  278136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:05:28.499231  278136 start.go:564] Will wait 60s for crictl version
	I1210 23:05:28.499291  278136 ssh_runner.go:195] Run: which crictl
	I1210 23:05:28.503669  278136 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:05:28.532177  278136 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:05:28.532269  278136 ssh_runner.go:195] Run: crio --version
	I1210 23:05:28.561587  278136 ssh_runner.go:195] Run: crio --version
	I1210 23:05:28.592747  278136 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 23:05:25.371310  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:27.842945  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:28.594020  278136 cli_runner.go:164] Run: docker network inspect embed-certs-468067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:28.612293  278136 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 23:05:28.616598  278136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:28.627201  278136 kubeadm.go:884] updating cluster {Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:05:28.627316  278136 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:28.627367  278136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:28.661883  278136 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:28.661902  278136 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:05:28.661944  278136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:28.687014  278136 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:28.687034  278136 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:05:28.687041  278136 kubeadm.go:935] updating node { 192.168.103.2  8443 v1.34.2 crio true true} ...
	I1210 23:05:28.687129  278136 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-468067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:05:28.687190  278136 ssh_runner.go:195] Run: crio config
	I1210 23:05:28.733943  278136 cni.go:84] Creating CNI manager for ""
	I1210 23:05:28.733974  278136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:28.733996  278136 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:05:28.734025  278136 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-468067 NodeName:embed-certs-468067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:05:28.734178  278136 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-468067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
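A few lines below, this config is written to /var/tmp/minikube/kubeadm.yaml.new and later handed to kubeadm init. A config of this shape can be sanity-checked offline with the staged kubeadm binary before an init is attempted; a sketch, assuming the v1.34 binary under /var/lib/minikube/binaries supports the usual config subcommands:

    $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config print init-defaults   # compare against upstream defaults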
	I1210 23:05:28.734252  278136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:05:28.742810  278136 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:05:28.742874  278136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:05:28.751108  278136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1210 23:05:28.763770  278136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:05:28.779326  278136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1210 23:05:28.792419  278136 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:05:28.796143  278136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:28.806368  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:28.886347  278136 ssh_runner.go:195] Run: sudo systemctl start kubelet
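The two scp calls above install the [Unit]/[Service] override shown earlier as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to the base unit in /lib/systemd/system/kubelet.service, so after the daemon-reload the effective ExecStart can be reviewed on the node; a sketch:

    $ systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    $ systemctl status kubelet --no-pager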
	I1210 23:05:28.915355  278136 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067 for IP: 192.168.103.2
	I1210 23:05:28.915375  278136 certs.go:195] generating shared ca certs ...
	I1210 23:05:28.915391  278136 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:28.915538  278136 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:05:28.915578  278136 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:05:28.915589  278136 certs.go:257] generating profile certs ...
	I1210 23:05:28.915662  278136 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key
	I1210 23:05:28.915683  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt with IP's: []
	I1210 23:05:29.071762  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt ...
	I1210 23:05:29.071790  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.crt: {Name:mke0e555380504e9132d2137e7e3455acb66a23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.071961  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key ...
	I1210 23:05:29.071972  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/client.key: {Name:mkade729adab8303334fe37f8122b250a832c9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.072045  278136 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675
	I1210 23:05:29.072062  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1210 23:05:29.182555  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 ...
	I1210 23:05:29.182578  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675: {Name:mk79dcee6a7b68243255d08226f8c8ea8df6f017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.182744  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675 ...
	I1210 23:05:29.182757  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675: {Name:mk10df82a762ea271844528df46692c222a8362f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.182829  278136 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt.06291675 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt
	I1210 23:05:29.182918  278136 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key.06291675 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key
	I1210 23:05:29.182985  278136 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key
	I1210 23:05:29.183000  278136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt with IP's: []
	I1210 23:05:29.307119  278136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt ...
	I1210 23:05:29.307141  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt: {Name:mk79ff9e69db8cc3194e716f102e712e2d4d77b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.307307  278136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key ...
	I1210 23:05:29.307320  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key: {Name:mk9ba245274e937db4839af0f85390a9d76968ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:29.307534  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:05:29.307573  278136 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:05:29.307584  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:05:29.307609  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:05:29.307633  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:05:29.307667  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:05:29.307708  278136 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:29.308231  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:05:29.327101  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:05:29.346183  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:05:29.364478  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:05:29.382184  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 23:05:29.399389  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:05:29.416638  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:05:29.433809  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:05:29.452092  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:05:29.472758  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:05:29.490967  278136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:05:29.509406  278136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
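The apiserver serving certificate generated above was signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2; once copied to /var/lib/minikube/certs/apiserver.crt on the node, its SANs can be double-checked with openssl (a sketch):

    $ sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'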
	I1210 23:05:29.522774  278136 ssh_runner.go:195] Run: openssl version
	I1210 23:05:29.529665  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.537656  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:05:29.545565  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.549586  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.549666  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:29.584765  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:05:29.592832  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:05:29.600987  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.608754  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:05:29.616631  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.620437  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.620484  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:05:29.655679  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:05:29.664002  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:05:29.672120  278136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.681216  278136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:05:29.689857  278136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.693709  278136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.693766  278136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:05:29.731507  278136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:29.739594  278136 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
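Each CA dropped into /usr/share/ca-certificates is made visible to OpenSSL-based clients by linking it into /etc/ssl/certs and then adding a second symlink named after its subject hash, which is what the openssl x509 -hash / ln -fs pairs above do (b5213941, 51391683 and 3ec20f2e are the hashes for the three certs in this run). The same pattern for one cert, as a sketch:

    $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    $ hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # ${hash} is b5213941 here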
	I1210 23:05:29.747821  278136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:05:29.751615  278136 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:05:29.751683  278136 kubeadm.go:401] StartCluster: {Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:29.751761  278136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:05:29.751831  278136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:05:29.777853  278136 cri.go:89] found id: ""
	I1210 23:05:29.777925  278136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:05:29.786216  278136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:05:29.794212  278136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:05:29.794263  278136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:05:29.801953  278136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:05:29.801970  278136 kubeadm.go:158] found existing configuration files:
	
	I1210 23:05:29.802006  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:05:29.809495  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:05:29.809549  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:05:29.817210  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:05:29.825100  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:05:29.825166  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:05:29.833323  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:05:29.841242  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:05:29.841302  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:05:29.848731  278136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:05:29.856766  278136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:05:29.856814  278136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:05:29.865300  278136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:05:29.902403  278136 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:05:29.902454  278136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:05:29.923349  278136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:05:29.923458  278136 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:05:29.923512  278136 kubeadm.go:319] OS: Linux
	I1210 23:05:29.923562  278136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:05:29.923628  278136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:05:29.923714  278136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:05:29.923819  278136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:05:29.923903  278136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:05:29.923977  278136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:05:29.924051  278136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:05:29.924101  278136 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:05:29.981605  278136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:05:29.981771  278136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:05:29.981894  278136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:05:29.988919  278136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 23:05:27.027050  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:29.526193  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:26.862824  279952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-443884:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.272703863s)
	I1210 23:05:26.862856  279952 kic.go:203] duration metric: took 4.272881051s to extract preloaded images to volume ...
	W1210 23:05:26.862949  279952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:05:26.862995  279952 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:05:26.863041  279952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:05:26.938446  279952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-443884 --name default-k8s-diff-port-443884 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-443884 --network default-k8s-diff-port-443884 --ip 192.168.76.2 --volume default-k8s-diff-port-443884:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:05:27.537953  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Running}}
	I1210 23:05:27.562632  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.593817  279952 cli_runner.go:164] Run: docker exec default-k8s-diff-port-443884 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:05:27.651271  279952 oci.go:144] the created container "default-k8s-diff-port-443884" has a running status.
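The docker run above publishes each container port to a random loopback port on the host (--publish=127.0.0.1::8444, ::22, ::2376, ::5000, ::32443); minikube later recovers the chosen host port for 22/tcp with a container-inspect template, but the full mapping can also be listed directly (a sketch):

    $ docker port default-k8s-diff-port-443884           # all published mappings
    $ docker port default-k8s-diff-port-443884 22/tcp    # 127.0.0.1:33084 in this run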
	I1210 23:05:27.651311  279952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa...
	I1210 23:05:27.769585  279952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:05:27.800953  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.828718  279952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:05:27.828741  279952 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-443884 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:05:27.889900  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:27.915356  279952 machine.go:94] provisionDockerMachine start ...
	I1210 23:05:27.915454  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:27.951712  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:27.952036  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:27.952052  279952 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:05:27.952985  279952 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 23:05:31.088959  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:05:31.088990  279952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-443884"
	I1210 23:05:31.089070  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.107804  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.108208  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.108239  279952 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-443884 && echo "default-k8s-diff-port-443884" | sudo tee /etc/hostname
	I1210 23:05:31.254706  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:05:31.254790  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.273656  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.273937  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.273961  279952 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-443884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-443884/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-443884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:05:31.409456  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:05:31.409482  279952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:05:31.409529  279952 ubuntu.go:190] setting up certificates
	I1210 23:05:31.409548  279952 provision.go:84] configureAuth start
	I1210 23:05:31.409602  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:31.427336  279952 provision.go:143] copyHostCerts
	I1210 23:05:31.427407  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:05:31.427418  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:05:31.427493  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:05:31.427589  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:05:31.427598  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:05:31.427631  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:05:31.427733  279952 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:05:31.427742  279952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:05:31.427768  279952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:05:31.427832  279952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-443884 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-443884 localhost minikube]
	I1210 23:05:31.667347  279952 provision.go:177] copyRemoteCerts
	I1210 23:05:31.667406  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:05:31.667438  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.686302  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:31.784186  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:05:31.803562  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 23:05:31.821057  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:05:31.839727  279952 provision.go:87] duration metric: took 430.167459ms to configureAuth
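configureAuth generated a server certificate for the machine (SANs listed in the provision line above) from the shared ca.pem/ca-key.pem and copied ca.pem, server.pem and server-key.pem into /etc/docker on the node; the chain can be verified there with openssl (a sketch, run inside the node):

    $ sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem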
	I1210 23:05:31.839748  279952 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:05:31.839920  279952 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:31.840025  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:31.859548  279952 main.go:143] libmachine: Using SSH client type: native
	I1210 23:05:31.859901  279952 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1210 23:05:31.859927  279952 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:05:32.153794  279952 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
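The SSH command above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts CRI-O; presumably the crio unit reads that file as an environment file and expands the variable on its command line (the wiring itself is not shown in this log). A sketch for confirming it on the node:

    $ cat /etc/sysconfig/crio.minikube
    $ systemctl show crio -p ExecStart --no-pager
    $ ps -o args= -C crio     # running command line should include --insecure-registry 10.96.0.0/12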
	I1210 23:05:32.153821  279952 machine.go:97] duration metric: took 4.238436809s to provisionDockerMachine
	I1210 23:05:32.153835  279952 client.go:176] duration metric: took 10.528837696s to LocalClient.Create
	I1210 23:05:32.153863  279952 start.go:167] duration metric: took 10.528985188s to libmachine.API.Create "default-k8s-diff-port-443884"
	I1210 23:05:32.153875  279952 start.go:293] postStartSetup for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:05:32.153889  279952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:05:32.153949  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:05:32.153985  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.171730  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.270740  279952 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:05:32.274281  279952 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:05:32.274307  279952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:05:32.274319  279952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:05:32.274371  279952 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:05:32.274450  279952 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:05:32.274542  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:05:32.282079  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:32.302413  279952 start.go:296] duration metric: took 148.520167ms for postStartSetup
	I1210 23:05:32.302872  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:32.320682  279952 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:05:32.321004  279952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:05:32.321053  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.346274  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.443063  279952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:05:32.448104  279952 start.go:128] duration metric: took 10.827612732s to createHost
	I1210 23:05:32.448128  279952 start.go:83] releasing machines lock for "default-k8s-diff-port-443884", held for 10.827764504s
	I1210 23:05:32.448198  279952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:05:32.466547  279952 ssh_runner.go:195] Run: cat /version.json
	I1210 23:05:32.466597  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.466663  279952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:05:32.466745  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:32.486179  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.486510  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:32.637008  279952 ssh_runner.go:195] Run: systemctl --version
	I1210 23:05:32.643974  279952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:05:32.682605  279952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:05:32.688290  279952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:05:32.688368  279952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:05:32.718783  279952 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:05:32.718805  279952 start.go:496] detecting cgroup driver to use...
	I1210 23:05:32.718839  279952 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:05:32.718887  279952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:05:32.736209  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:05:32.749128  279952 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:05:32.749186  279952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:05:32.766975  279952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:05:32.785140  279952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:05:32.874331  279952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:05:32.963222  279952 docker.go:234] disabling docker service ...
	I1210 23:05:32.963291  279952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:05:32.982534  279952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:05:32.997142  279952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:05:33.081960  279952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:05:33.181936  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:05:33.195465  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:05:33.210008  279952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:05:33.210065  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.220700  279952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:05:33.220765  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.229956  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.239377  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.249068  279952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:05:33.257305  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.266019  279952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.279712  279952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:05:33.288539  279952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:05:33.296476  279952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:05:33.303858  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:33.389580  279952 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:05:33.538797  279952 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:05:33.538869  279952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:05:33.543296  279952 start.go:564] Will wait 60s for crictl version
	I1210 23:05:33.543365  279952 ssh_runner.go:195] Run: which crictl
	I1210 23:05:33.547325  279952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:05:33.571444  279952 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:05:33.571514  279952 ssh_runner.go:195] Run: crio --version
	I1210 23:05:33.598912  279952 ssh_runner.go:195] Run: crio --version
	I1210 23:05:33.630913  279952 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 23:05:30.334341  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:32.334430  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:29.991802  278136 out.go:252]   - Generating certificates and keys ...
	I1210 23:05:29.991901  278136 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:05:29.991990  278136 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:05:30.351608  278136 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:05:30.593176  278136 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:05:30.755320  278136 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:05:30.977407  278136 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:05:31.085043  278136 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:05:31.085216  278136 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-468067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 23:05:31.884952  278136 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:05:31.885114  278136 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-468067 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 23:05:32.128820  278136 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:05:32.281129  278136 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:05:33.153677  278136 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:05:33.153771  278136 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:05:33.283014  278136 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:05:33.675630  278136 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:05:33.759625  278136 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:05:33.814126  278136 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:05:34.008745  278136 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:05:34.009454  278136 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:05:34.013938  278136 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
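At this point kubeadm has generated the full certificate set (CAs, apiserver, apiserver-kubelet-client, front-proxy, etcd and the sa key) under /var/lib/minikube/certs and the admin/super-admin/kubelet/controller-manager/scheduler kubeconfigs under /etc/kubernetes. The inventory and expiry dates can be listed with the staged binary; a sketch, assuming the usual certs subcommand flags:

    $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm certs check-expiration \
        --cert-dir /var/lib/minikube/certs --config /var/tmp/minikube/kubeadm.yaml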
	I1210 23:05:33.632188  279952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:05:33.650548  279952 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:05:33.654778  279952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:05:33.665335  279952 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:05:33.665471  279952 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:05:33.665522  279952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:33.699300  279952 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:33.699325  279952 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:05:33.699383  279952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:05:33.725754  279952 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:05:33.725775  279952 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:05:33.725784  279952 kubeadm.go:935] updating node { 192.168.76.2  8444 v1.34.2 crio true true} ...
	I1210 23:05:33.725879  279952 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-443884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
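The kubelet unit fragment above is delivered as a systemd drop-in (the 378-byte 10-kubeadm.conf scp'd later in this run); the empty ExecStart= line clears the base unit's command before the minikube-specific invocation replaces it. A hedged way to view the merged unit on the node, assuming the profile name from this run:

    minikube ssh -p default-k8s-diff-port-443884 -- systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in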
	I1210 23:05:33.725958  279952 ssh_runner.go:195] Run: crio config
	I1210 23:05:33.773897  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:33.773919  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:33.773933  279952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:05:33.773952  279952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-443884 NodeName:default-k8s-diff-port-443884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:05:33.774070  279952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-443884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:05:33.774129  279952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:05:33.782558  279952 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:05:33.782623  279952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:05:33.790780  279952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 23:05:33.803922  279952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:05:33.819325  279952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
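The configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2224-byte scp just above) and later promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init consumes it. A minimal manual check, assuming the profile name used in this run:

    minikube ssh -p default-k8s-diff-port-443884 -- sudo cat /var/tmp/minikube/kubeadm.yaml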
	I1210 23:05:33.833524  279952 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:05:33.837539  279952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
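The two /etc/hosts rewrites (192.168.76.1 for host.minikube.internal earlier, 192.168.76.2 for control-plane.minikube.internal here) pin minikube's internal names inside the node. An illustrative spot check from the host:

    minikube ssh -p default-k8s-diff-port-443884 -- grep minikube.internal /etc/hosts
    # expect: 192.168.76.1  host.minikube.internal
    #         192.168.76.2  control-plane.minikube.internal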
	I1210 23:05:33.847973  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:33.932121  279952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:33.960425  279952 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884 for IP: 192.168.76.2
	I1210 23:05:33.960443  279952 certs.go:195] generating shared ca certs ...
	I1210 23:05:33.960462  279952 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:33.960630  279952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:05:33.960704  279952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:05:33.960718  279952 certs.go:257] generating profile certs ...
	I1210 23:05:33.960792  279952 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key
	I1210 23:05:33.960817  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt with IP's: []
	I1210 23:05:34.057077  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt ...
	I1210 23:05:34.057105  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.crt: {Name:mk51847952dee09af95f401b00c827a06f5160a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.057270  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key ...
	I1210 23:05:34.057282  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key: {Name:mkf375f3b6a63380e9965a3cb09d66e6ff1b51cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.057361  279952 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94
	I1210 23:05:34.057384  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 23:05:34.136636  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 ...
	I1210 23:05:34.136676  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94: {Name:mk002a91b8c9f2fb4b46891974129537a6ecfc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.136847  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94 ...
	I1210 23:05:34.136862  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94: {Name:mkd3d0eff1194b75939303cc097dff6606b0b6c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.136933  279952 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt.03b95e94 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt
	I1210 23:05:34.137006  279952 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key
	I1210 23:05:34.137066  279952 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key
	I1210 23:05:34.137081  279952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt with IP's: []
	I1210 23:05:34.220084  279952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt ...
	I1210 23:05:34.220108  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt: {Name:mka111ca179d41320378687d39fe32a1ab401271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.220284  279952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key ...
	I1210 23:05:34.220298  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key: {Name:mkfd978f51ccbb0329e7bc88cc26a4c2dc6d8abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:34.220523  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:05:34.220562  279952 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:05:34.220573  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:05:34.220597  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:05:34.220621  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:05:34.220659  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:05:34.220724  279952 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:05:34.221261  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:05:34.240495  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:05:34.260518  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:05:34.278207  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:05:34.295549  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 23:05:34.313819  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:05:34.332779  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:05:34.351978  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:05:34.369453  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:05:34.389088  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:05:34.406689  279952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:05:34.423900  279952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:05:34.436918  279952 ssh_runner.go:195] Run: openssl version
	I1210 23:05:34.443077  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.451518  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:05:34.459429  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.463331  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.463387  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:05:34.498849  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:05:34.506923  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:05:34.514672  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.522328  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:05:34.530594  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.534511  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.534565  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:05:34.569396  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:34.577310  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:05:34.585012  279952 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.592934  279952 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:05:34.600629  279952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.604461  279952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.604515  279952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:05:34.639297  279952 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:05:34.647330  279952 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
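The openssl/ln pairs above follow the standard OpenSSL CA-directory layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (51391683, 3ec20f2e and b5213941 in this run). One such link, reproduced as a sketch on the node:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0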
	I1210 23:05:34.655251  279952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:05:34.659028  279952 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:05:34.659086  279952 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:05:34.659172  279952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:05:34.659239  279952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:05:34.690714  279952 cri.go:89] found id: ""
	I1210 23:05:34.690785  279952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:05:34.699614  279952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:05:34.709093  279952 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:05:34.709144  279952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:05:34.717328  279952 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:05:34.717359  279952 kubeadm.go:158] found existing configuration files:
	
	I1210 23:05:34.717405  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 23:05:34.725308  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:05:34.725366  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:05:34.733106  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 23:05:34.741129  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:05:34.741182  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:05:34.749178  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 23:05:34.757226  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:05:34.757275  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:05:34.764816  279952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 23:05:34.772969  279952 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:05:34.773022  279952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:05:34.781188  279952 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:05:34.830362  279952 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:05:34.830437  279952 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:05:34.853117  279952 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:05:34.853190  279952 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:05:34.853230  279952 kubeadm.go:319] OS: Linux
	I1210 23:05:34.853297  279952 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:05:34.853373  279952 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:05:34.853416  279952 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:05:34.853458  279952 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:05:34.853513  279952 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:05:34.853553  279952 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:05:34.853661  279952 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:05:34.853730  279952 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:05:34.917131  279952 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:05:34.917280  279952 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:05:34.917435  279952 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:05:34.924504  279952 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 23:05:31.528280  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:34.026219  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:34.926960  279952 out.go:252]   - Generating certificates and keys ...
	I1210 23:05:34.927084  279952 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:05:34.927196  279952 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:05:35.403022  279952 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:05:35.705371  279952 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:05:36.157799  279952 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:05:34.016209  278136 out.go:252]   - Booting up control plane ...
	I1210 23:05:34.016326  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:05:34.016435  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:05:34.017554  278136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:05:34.032908  278136 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:05:34.033076  278136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:05:34.040913  278136 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:05:34.041222  278136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:05:34.041310  278136 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:05:34.147564  278136 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:05:34.147726  278136 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:05:35.148682  278136 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001186459s
	I1210 23:05:35.151592  278136 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:05:35.151727  278136 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1210 23:05:35.151852  278136 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:05:35.151961  278136 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:05:37.115948  278136 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.964278263s
	I1210 23:05:37.326345  278136 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.174576485s
	I1210 23:05:38.653088  278136 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501379838s
	I1210 23:05:38.672660  278136 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:05:38.682162  278136 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:05:38.691627  278136 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:05:38.691817  278136 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-468067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:05:38.699476  278136 kubeadm.go:319] [bootstrap-token] Using token: vc7tt6.1ma2zdzjremls6oi
	I1210 23:05:36.394195  279952 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:05:36.699432  279952 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:05:36.699668  279952 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-443884 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 23:05:36.853566  279952 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:05:36.853729  279952 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-443884 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 23:05:37.237894  279952 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:05:37.887346  279952 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:05:38.035256  279952 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:05:38.035414  279952 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:05:38.131597  279952 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:05:38.206508  279952 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:05:38.262108  279952 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:05:38.568290  279952 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:05:38.740049  279952 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:05:38.740793  279952 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:05:38.744608  279952 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1210 23:05:34.335263  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:36.833884  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:38.834469  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:38.701150  278136 out.go:252]   - Configuring RBAC rules ...
	I1210 23:05:38.701295  278136 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:05:38.704803  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:05:38.709973  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:05:38.712391  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:05:38.714770  278136 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:05:38.717330  278136 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:05:39.059930  278136 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:05:39.476535  278136 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:05:40.059845  278136 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:05:40.060904  278136 kubeadm.go:319] 
	I1210 23:05:40.061003  278136 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:05:40.061040  278136 kubeadm.go:319] 
	I1210 23:05:40.061181  278136 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:05:40.061199  278136 kubeadm.go:319] 
	I1210 23:05:40.061232  278136 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:05:40.061318  278136 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:05:40.061392  278136 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:05:40.061401  278136 kubeadm.go:319] 
	I1210 23:05:40.061493  278136 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:05:40.061510  278136 kubeadm.go:319] 
	I1210 23:05:40.061577  278136 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:05:40.061588  278136 kubeadm.go:319] 
	I1210 23:05:40.061670  278136 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:05:40.061826  278136 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:05:40.061923  278136 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:05:40.061933  278136 kubeadm.go:319] 
	I1210 23:05:40.062072  278136 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:05:40.062192  278136 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:05:40.062214  278136 kubeadm.go:319] 
	I1210 23:05:40.062308  278136 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vc7tt6.1ma2zdzjremls6oi \
	I1210 23:05:40.062443  278136 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:05:40.062470  278136 kubeadm.go:319] 	--control-plane 
	I1210 23:05:40.062478  278136 kubeadm.go:319] 
	I1210 23:05:40.062582  278136 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:05:40.062591  278136 kubeadm.go:319] 
	I1210 23:05:40.062719  278136 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vc7tt6.1ma2zdzjremls6oi \
	I1210 23:05:40.062828  278136 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:05:40.065627  278136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:05:40.065833  278136 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
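Both profiles print the same --discovery-token-ca-cert-hash (8e17e4a5…) because they reuse the shared minikubeCA generated earlier in this run. That hash is simply the SHA-256 of the cluster CA's public key and can be recomputed on a node; a sketch using minikube's certificate path rather than kubeadm's default location:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'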
	I1210 23:05:40.065868  278136 cni.go:84] Creating CNI manager for ""
	I1210 23:05:40.065881  278136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:40.067426  278136 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1210 23:05:36.028634  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	W1210 23:05:38.526674  270470 pod_ready.go:104] pod "coredns-5dd5756b68-6mzkn" is not "Ready", error: <nil>
	I1210 23:05:39.026394  270470 pod_ready.go:94] pod "coredns-5dd5756b68-6mzkn" is "Ready"
	I1210 23:05:39.026418  270470 pod_ready.go:86] duration metric: took 37.006112476s for pod "coredns-5dd5756b68-6mzkn" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.029141  270470 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.032878  270470 pod_ready.go:94] pod "etcd-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.032895  270470 pod_ready.go:86] duration metric: took 3.736841ms for pod "etcd-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.035267  270470 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.039084  270470 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.039100  270470 pod_ready.go:86] duration metric: took 3.817017ms for pod "kube-apiserver-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.041365  270470 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.224222  270470 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-280530" is "Ready"
	I1210 23:05:39.224250  270470 pod_ready.go:86] duration metric: took 182.867637ms for pod "kube-controller-manager-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.425713  270470 pod_ready.go:83] waiting for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:39.824129  270470 pod_ready.go:94] pod "kube-proxy-nvgl4" is "Ready"
	I1210 23:05:39.824155  270470 pod_ready.go:86] duration metric: took 398.41578ms for pod "kube-proxy-nvgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.025046  270470 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.424982  270470 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-280530" is "Ready"
	I1210 23:05:40.425010  270470 pod_ready.go:86] duration metric: took 399.940018ms for pod "kube-scheduler-old-k8s-version-280530" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:40.425028  270470 pod_ready.go:40] duration metric: took 38.409041474s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:40.471271  270470 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 23:05:40.472796  270470 out.go:203] 
	W1210 23:05:40.474173  270470 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 23:05:40.475227  270470 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 23:05:40.476535  270470 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-280530" cluster and "default" namespace by default
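The skew warning above (kubectl 1.34.3 against a 1.28.0 cluster, minor skew 6) can be avoided with the version-matched kubectl that minikube downloads for the profile, as the hint suggests:

    minikube -p old-k8s-version-280530 kubectl -- get pods -A   # uses the bundled v1.28.0 client instead of /usr/local/bin/kubectl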
	I1210 23:05:38.745963  279952 out.go:252]   - Booting up control plane ...
	I1210 23:05:38.746105  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:05:38.746206  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:05:38.747825  279952 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:05:38.762756  279952 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:05:38.762924  279952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:05:38.769442  279952 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:05:38.769622  279952 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:05:38.769715  279952 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:05:38.869128  279952 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:05:38.869246  279952 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:05:40.369850  279952 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500867942s
	I1210 23:05:40.374332  279952 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:05:40.374482  279952 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1210 23:05:40.374711  279952 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:05:40.374834  279952 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 23:05:40.835431  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	W1210 23:05:43.334516  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:40.068553  278136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:05:40.073284  278136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:05:40.073306  278136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:05:40.091013  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:05:40.303352  278136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:05:40.303417  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:40.303441  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-468067 minikube.k8s.io/updated_at=2025_12_10T23_05_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=embed-certs-468067 minikube.k8s.io/primary=true
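The label command above stamps the node with minikube's bookkeeping labels (updated_at, version, commit, name, primary). A hedged way to read them back once the apiserver answers:

    kubectl --context embed-certs-468067 get node embed-certs-468067 --show-labels \
      | tr ',' '\n' | grep minikube.k8s.io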
	I1210 23:05:40.313293  278136 ops.go:34] apiserver oom_adj: -16
	I1210 23:05:40.378089  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:40.878855  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:41.378845  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:41.878906  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.378433  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.878834  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:43.378962  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:43.879108  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:42.393467  279952 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.018399943s
	I1210 23:05:42.394503  279952 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.020217138s
	I1210 23:05:44.376449  279952 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002089254s
	I1210 23:05:44.394198  279952 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:05:44.405702  279952 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:05:44.416487  279952 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:05:44.416805  279952 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-443884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:05:44.426438  279952 kubeadm.go:319] [bootstrap-token] Using token: bdnp9h.to2dgl31xr9dkwz5
	I1210 23:05:44.379177  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:44.878480  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:45.378914  278136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:45.449851  278136 kubeadm.go:1114] duration metric: took 5.14650104s to wait for elevateKubeSystemPrivileges
	I1210 23:05:45.449886  278136 kubeadm.go:403] duration metric: took 15.698207011s to StartCluster
	I1210 23:05:45.450011  278136 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:45.450102  278136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:45.452199  278136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:45.452484  278136 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:45.452632  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:05:45.453102  278136 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:45.453099  278136 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:05:45.453199  278136 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-468067"
	I1210 23:05:45.453231  278136 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-468067"
	I1210 23:05:45.453261  278136 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:05:45.453287  278136 addons.go:70] Setting default-storageclass=true in profile "embed-certs-468067"
	I1210 23:05:45.453309  278136 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-468067"
	I1210 23:05:45.453723  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.454265  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.454717  278136 out.go:179] * Verifying Kubernetes components...
	I1210 23:05:45.457422  278136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:45.486553  278136 addons.go:239] Setting addon default-storageclass=true in "embed-certs-468067"
	I1210 23:05:45.486718  278136 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:05:45.487325  278136 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:05:45.490135  278136 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:05:44.428776  279952 out.go:252]   - Configuring RBAC rules ...
	I1210 23:05:44.428945  279952 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:05:44.431774  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:05:44.437409  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:05:44.441061  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:05:44.443828  279952 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:05:44.447026  279952 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:05:44.782438  279952 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:05:45.200076  279952 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:05:45.782497  279952 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:05:45.783786  279952 kubeadm.go:319] 
	I1210 23:05:45.783890  279952 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:05:45.783902  279952 kubeadm.go:319] 
	I1210 23:05:45.783990  279952 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:05:45.783998  279952 kubeadm.go:319] 
	I1210 23:05:45.784039  279952 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:05:45.784112  279952 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:05:45.784188  279952 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:05:45.784204  279952 kubeadm.go:319] 
	I1210 23:05:45.784312  279952 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:05:45.784331  279952 kubeadm.go:319] 
	I1210 23:05:45.784396  279952 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:05:45.784406  279952 kubeadm.go:319] 
	I1210 23:05:45.784469  279952 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:05:45.784575  279952 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:05:45.784730  279952 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:05:45.784744  279952 kubeadm.go:319] 
	I1210 23:05:45.784874  279952 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:05:45.784977  279952 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:05:45.784989  279952 kubeadm.go:319] 
	I1210 23:05:45.785081  279952 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token bdnp9h.to2dgl31xr9dkwz5 \
	I1210 23:05:45.785190  279952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:05:45.785217  279952 kubeadm.go:319] 	--control-plane 
	I1210 23:05:45.785226  279952 kubeadm.go:319] 
	I1210 23:05:45.785345  279952 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:05:45.785356  279952 kubeadm.go:319] 
	I1210 23:05:45.785453  279952 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token bdnp9h.to2dgl31xr9dkwz5 \
	I1210 23:05:45.785567  279952 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:05:45.788874  279952 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:05:45.789027  279952 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 23:05:45.789056  279952 cni.go:84] Creating CNI manager for ""
	I1210 23:05:45.789085  279952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:05:45.790618  279952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:05:45.492042  278136 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:45.492059  278136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:05:45.492115  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:45.519499  278136 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:45.519528  278136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:05:45.519625  278136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:05:45.523139  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:45.543799  278136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:05:45.561861  278136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:05:45.619261  278136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:45.642303  278136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:45.661850  278136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:45.731298  278136 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1210 23:05:45.732304  278136 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:05:45.961001  278136 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
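	(The host-record injection above rewrites the live coredns ConfigMap in place. Based on the sed expression in the replace command earlier in this run, the Corefile is expected to gain a hosts block roughly like the following; this is an illustrative reconstruction, not captured cluster output:

	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }

	The resulting ConfigMap can be inspected after the run with: kubectl -n kube-system get configmap coredns -o yaml)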
	I1210 23:05:45.791839  279952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:05:45.796562  279952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:05:45.796582  279952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:05:45.811119  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:05:46.030619  279952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:05:46.030699  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:46.030765  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-443884 minikube.k8s.io/updated_at=2025_12_10T23_05_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=default-k8s-diff-port-443884 minikube.k8s.io/primary=true
	I1210 23:05:46.041384  279952 ops.go:34] apiserver oom_adj: -16
	I1210 23:05:46.113000  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1210 23:05:45.334950  273929 pod_ready.go:104] pod "coredns-7d764666f9-5tpb8" is not "Ready", error: <nil>
	I1210 23:05:45.834730  273929 pod_ready.go:94] pod "coredns-7d764666f9-5tpb8" is "Ready"
	I1210 23:05:45.834762  273929 pod_ready.go:86] duration metric: took 31.506416988s for pod "coredns-7d764666f9-5tpb8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.837911  273929 pod_ready.go:83] waiting for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.842136  273929 pod_ready.go:94] pod "etcd-no-preload-092439" is "Ready"
	I1210 23:05:45.842157  273929 pod_ready.go:86] duration metric: took 4.230953ms for pod "etcd-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.845582  273929 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.849432  273929 pod_ready.go:94] pod "kube-apiserver-no-preload-092439" is "Ready"
	I1210 23:05:45.849453  273929 pod_ready.go:86] duration metric: took 3.846386ms for pod "kube-apiserver-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:45.851434  273929 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.033192  273929 pod_ready.go:94] pod "kube-controller-manager-no-preload-092439" is "Ready"
	I1210 23:05:46.033224  273929 pod_ready.go:86] duration metric: took 181.767834ms for pod "kube-controller-manager-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.232384  273929 pod_ready.go:83] waiting for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.632419  273929 pod_ready.go:94] pod "kube-proxy-gqz42" is "Ready"
	I1210 23:05:46.632450  273929 pod_ready.go:86] duration metric: took 400.040431ms for pod "kube-proxy-gqz42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:46.832502  273929 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:47.232861  273929 pod_ready.go:94] pod "kube-scheduler-no-preload-092439" is "Ready"
	I1210 23:05:47.232892  273929 pod_ready.go:86] duration metric: took 400.366591ms for pod "kube-scheduler-no-preload-092439" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:47.232908  273929 pod_ready.go:40] duration metric: took 32.909358343s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:47.280508  273929 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:05:47.281991  273929 out.go:179] * Done! kubectl is now configured to use "no-preload-092439" cluster and "default" namespace by default
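	(Once a profile reports "Done!", the kubeconfig context it writes can be exercised directly. A minimal check, assuming the context is named after the profile as the message above suggests, would be:

	        kubectl config current-context
	        kubectl --context no-preload-092439 get nodes

	Neither command is part of the captured test run; they are only a quick way to confirm the configuration the log describes.)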
	I1210 23:05:45.962276  278136 addons.go:530] duration metric: took 509.17689ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:05:46.235747  278136 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-468067" context rescaled to 1 replicas
	W1210 23:05:47.735830  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	I1210 23:05:46.613910  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:47.113080  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:47.613875  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:48.113232  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:48.613174  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:49.113224  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:49.613917  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.113829  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.613873  279952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:05:50.685511  279952 kubeadm.go:1114] duration metric: took 4.654880357s to wait for elevateKubeSystemPrivileges
	I1210 23:05:50.685559  279952 kubeadm.go:403] duration metric: took 16.026470518s to StartCluster
	I1210 23:05:50.685582  279952 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:50.685709  279952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:05:50.687466  279952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:05:50.687720  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:05:50.687732  279952 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:05:50.687802  279952 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:05:50.687909  279952 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-443884"
	I1210 23:05:50.687931  279952 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-443884"
	I1210 23:05:50.687946  279952 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-443884"
	I1210 23:05:50.687960  279952 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:05:50.687976  279952 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-443884"
	I1210 23:05:50.687954  279952 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:05:50.688332  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.688484  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.689939  279952 out.go:179] * Verifying Kubernetes components...
	I1210 23:05:50.691358  279952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:05:50.711362  279952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:05:50.712612  279952 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:50.712632  279952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:05:50.712715  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:50.712803  279952 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-443884"
	I1210 23:05:50.712847  279952 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:05:50.713267  279952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:05:50.743749  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:50.746417  279952 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:50.746441  279952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:05:50.746494  279952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:05:50.770270  279952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:05:50.774861  279952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:05:50.833407  279952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:05:50.857046  279952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:05:50.880889  279952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:05:50.958517  279952 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1210 23:05:50.959713  279952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:05:51.167487  279952 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:05:51.168805  279952 addons.go:530] duration metric: took 481.002504ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1210 23:05:49.737595  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	W1210 23:05:52.236343  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	I1210 23:05:51.462493  279952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-443884" context rescaled to 1 replicas
	W1210 23:05:52.964002  279952 node_ready.go:57] node "default-k8s-diff-port-443884" has "Ready":"False" status (will retry)
	W1210 23:05:55.463440  279952 node_ready.go:57] node "default-k8s-diff-port-443884" has "Ready":"False" status (will retry)
	W1210 23:05:54.735737  278136 node_ready.go:57] node "embed-certs-468067" has "Ready":"False" status (will retry)
	I1210 23:05:56.235232  278136 node_ready.go:49] node "embed-certs-468067" is "Ready"
	I1210 23:05:56.235259  278136 node_ready.go:38] duration metric: took 10.502925964s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:05:56.235273  278136 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:05:56.235321  278136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:05:56.247302  278136 api_server.go:72] duration metric: took 10.794783092s to wait for apiserver process to appear ...
	I1210 23:05:56.247337  278136 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:05:56.247372  278136 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:05:56.251592  278136 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 23:05:56.252622  278136 api_server.go:141] control plane version: v1.34.2
	I1210 23:05:56.252656  278136 api_server.go:131] duration metric: took 5.311301ms to wait for apiserver health ...
	I1210 23:05:56.252668  278136 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:05:56.256020  278136 system_pods.go:59] 8 kube-system pods found
	I1210 23:05:56.256061  278136 system_pods.go:61] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:05:56.256077  278136 system_pods.go:61] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running
	I1210 23:05:56.256084  278136 system_pods.go:61] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running
	I1210 23:05:56.256100  278136 system_pods.go:61] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running
	I1210 23:05:56.256111  278136 system_pods.go:61] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running
	I1210 23:05:56.256117  278136 system_pods.go:61] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running
	I1210 23:05:56.256124  278136 system_pods.go:61] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running
	I1210 23:05:56.256130  278136 system_pods.go:61] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:05:56.256141  278136 system_pods.go:74] duration metric: took 3.465092ms to wait for pod list to return data ...
	I1210 23:05:56.256153  278136 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:05:56.258467  278136 default_sa.go:45] found service account: "default"
	I1210 23:05:56.258487  278136 default_sa.go:55] duration metric: took 2.326823ms for default service account to be created ...
	I1210 23:05:56.258494  278136 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:05:56.261153  278136 system_pods.go:86] 8 kube-system pods found
	I1210 23:05:56.261177  278136 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:05:56.261182  278136 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running
	I1210 23:05:56.261188  278136 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running
	I1210 23:05:56.261193  278136 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running
	I1210 23:05:56.261198  278136 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running
	I1210 23:05:56.261203  278136 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running
	I1210 23:05:56.261208  278136 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running
	I1210 23:05:56.261215  278136 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:05:56.261238  278136 retry.go:31] will retry after 290.183983ms: missing components: kube-dns
	I1210 23:05:56.556212  278136 system_pods.go:86] 8 kube-system pods found
	I1210 23:05:56.556258  278136 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:05:56.556272  278136 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running
	I1210 23:05:56.556281  278136 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running
	I1210 23:05:56.556287  278136 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running
	I1210 23:05:56.556293  278136 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running
	I1210 23:05:56.556298  278136 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running
	I1210 23:05:56.556308  278136 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running
	I1210 23:05:56.556313  278136 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running
	I1210 23:05:56.556329  278136 retry.go:31] will retry after 349.591161ms: missing components: kube-dns
	I1210 23:05:56.910127  278136 system_pods.go:86] 8 kube-system pods found
	I1210 23:05:56.910185  278136 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:05:56.910195  278136 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running
	I1210 23:05:56.910202  278136 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running
	I1210 23:05:56.910207  278136 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running
	I1210 23:05:56.910214  278136 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running
	I1210 23:05:56.910220  278136 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running
	I1210 23:05:56.910226  278136 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running
	I1210 23:05:56.910232  278136 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running
	I1210 23:05:56.910253  278136 retry.go:31] will retry after 307.131209ms: missing components: kube-dns
	I1210 23:05:57.221242  278136 system_pods.go:86] 8 kube-system pods found
	I1210 23:05:57.221272  278136 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:05:57.221279  278136 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running
	I1210 23:05:57.221285  278136 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running
	I1210 23:05:57.221289  278136 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running
	I1210 23:05:57.221293  278136 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running
	I1210 23:05:57.221296  278136 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running
	I1210 23:05:57.221300  278136 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running
	I1210 23:05:57.221303  278136 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running
	I1210 23:05:57.221322  278136 retry.go:31] will retry after 544.9241ms: missing components: kube-dns
	I1210 23:05:57.771102  278136 system_pods.go:86] 8 kube-system pods found
	I1210 23:05:57.771147  278136 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Running
	I1210 23:05:57.771158  278136 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running
	I1210 23:05:57.771165  278136 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running
	I1210 23:05:57.771171  278136 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running
	I1210 23:05:57.771177  278136 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running
	I1210 23:05:57.771183  278136 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running
	I1210 23:05:57.771189  278136 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running
	I1210 23:05:57.771198  278136 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running
	I1210 23:05:57.771209  278136 system_pods.go:126] duration metric: took 1.512708248s to wait for k8s-apps to be running ...
	I1210 23:05:57.771223  278136 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:05:57.771275  278136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:05:57.786988  278136 system_svc.go:56] duration metric: took 15.753577ms WaitForService to wait for kubelet
	I1210 23:05:57.787013  278136 kubeadm.go:587] duration metric: took 12.334502289s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:05:57.787031  278136 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:05:57.790228  278136 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:05:57.790259  278136 node_conditions.go:123] node cpu capacity is 8
	I1210 23:05:57.790278  278136 node_conditions.go:105] duration metric: took 3.24177ms to run NodePressure ...
	I1210 23:05:57.790292  278136 start.go:242] waiting for startup goroutines ...
	I1210 23:05:57.790304  278136 start.go:247] waiting for cluster config update ...
	I1210 23:05:57.790318  278136 start.go:256] writing updated cluster config ...
	I1210 23:05:57.790682  278136 ssh_runner.go:195] Run: rm -f paused
	I1210 23:05:57.794844  278136 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:57.798772  278136 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qw48c" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:57.803254  278136 pod_ready.go:94] pod "coredns-66bc5c9577-qw48c" is "Ready"
	I1210 23:05:57.803278  278136 pod_ready.go:86] duration metric: took 4.484471ms for pod "coredns-66bc5c9577-qw48c" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:57.805284  278136 pod_ready.go:83] waiting for pod "etcd-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:57.809081  278136 pod_ready.go:94] pod "etcd-embed-certs-468067" is "Ready"
	I1210 23:05:57.809100  278136 pod_ready.go:86] duration metric: took 3.799915ms for pod "etcd-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:57.811020  278136 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:57.814685  278136 pod_ready.go:94] pod "kube-apiserver-embed-certs-468067" is "Ready"
	I1210 23:05:57.814710  278136 pod_ready.go:86] duration metric: took 3.666501ms for pod "kube-apiserver-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:57.816866  278136 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:58.199459  278136 pod_ready.go:94] pod "kube-controller-manager-embed-certs-468067" is "Ready"
	I1210 23:05:58.199489  278136 pod_ready.go:86] duration metric: took 382.602844ms for pod "kube-controller-manager-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:58.399743  278136 pod_ready.go:83] waiting for pod "kube-proxy-27pft" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:58.800145  278136 pod_ready.go:94] pod "kube-proxy-27pft" is "Ready"
	I1210 23:05:58.800169  278136 pod_ready.go:86] duration metric: took 400.406129ms for pod "kube-proxy-27pft" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:58.999915  278136 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:59.399724  278136 pod_ready.go:94] pod "kube-scheduler-embed-certs-468067" is "Ready"
	I1210 23:05:59.399747  278136 pod_ready.go:86] duration metric: took 399.801436ms for pod "kube-scheduler-embed-certs-468067" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:05:59.399763  278136 pod_ready.go:40] duration metric: took 1.604883192s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:05:59.446359  278136 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:05:59.448962  278136 out.go:179] * Done! kubectl is now configured to use "embed-certs-468067" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 23:05:28 no-preload-092439 crio[565]: time="2025-12-10T23:05:28.449131035Z" level=info msg="Created container efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk/kubernetes-dashboard" id=a95d98d6-1a14-4757-8074-88dcccee03e7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:28 no-preload-092439 crio[565]: time="2025-12-10T23:05:28.449922454Z" level=info msg="Starting container: efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5" id=dc7b4008-525d-44ab-ab09-dce83fb8cb7b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:28 no-preload-092439 crio[565]: time="2025-12-10T23:05:28.452063018Z" level=info msg="Started container" PID=1719 containerID=efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5 description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk/kubernetes-dashboard id=dc7b4008-525d-44ab-ab09-dce83fb8cb7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e4f96b27f0688b3b03dbd68556b8ed16e974d13f00c8856a933c65962d1524a
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.897523122Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=856d9104-2ce0-49fb-962e-a22a60e24933 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.900699541Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=386f50f1-aca6-4b02-98ff-8ed3db9510b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.903439152Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper" id=d02b9591-cd04-4498-97b2-d036d23b3c8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.903562162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.910531409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.911039772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.938844442Z" level=info msg="Created container 4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper" id=d02b9591-cd04-4498-97b2-d036d23b3c8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.939483243Z" level=info msg="Starting container: 4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4" id=336e27ca-f1a2-4f56-b4b2-b3465d2b5b79 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.942216884Z" level=info msg="Started container" PID=1737 containerID=4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper id=336e27ca-f1a2-4f56-b4b2-b3465d2b5b79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09da54d6dde5201f3c2df23881290f76e99895e199c6f8c496173ad767b18ac3
	Dec 10 23:05:35 no-preload-092439 crio[565]: time="2025-12-10T23:05:35.000233777Z" level=info msg="Removing container: cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a" id=9baaf368-fd1d-4dde-8e25-3262ba9bcdef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:05:35 no-preload-092439 crio[565]: time="2025-12-10T23:05:35.010439003Z" level=info msg="Removed container cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper" id=9baaf368-fd1d-4dde-8e25-3262ba9bcdef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.024490731Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=49218e37-1e66-441e-8e52-8476627a2a78 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.025540016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bcd00f96-14ce-44fe-bf76-caea55841ea8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.026631462Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=be335b1b-d4b8-4502-9a4e-ec381e18f4eb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.02678351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.031196864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.031374379Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ca728880da80faf1c8b0614d01ff15bed541abed6852b0736fb2c2a38694e85f/merged/etc/passwd: no such file or directory"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.031410388Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ca728880da80faf1c8b0614d01ff15bed541abed6852b0736fb2c2a38694e85f/merged/etc/group: no such file or directory"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.03184585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.059405341Z" level=info msg="Created container db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00: kube-system/storage-provisioner/storage-provisioner" id=be335b1b-d4b8-4502-9a4e-ec381e18f4eb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.060059602Z" level=info msg="Starting container: db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00" id=79fe71eb-f2fc-4ee6-90b4-3a9636f2df6e name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.062189317Z" level=info msg="Started container" PID=1753 containerID=db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00 description=kube-system/storage-provisioner/storage-provisioner id=79fe71eb-f2fc-4ee6-90b4-3a9636f2df6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc37a4c29e3836b2719e63650de5284f6f3af27aded1a2b393d434a51f938c18
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	db89ec35a1bd7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   bc37a4c29e383       storage-provisioner                          kube-system
	4da3c51cf46fa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   09da54d6dde52       dashboard-metrics-scraper-867fb5f87b-xdj7x   kubernetes-dashboard
	efaff6f1c0d44       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   2e4f96b27f068       kubernetes-dashboard-b84665fb8-6jlnk         kubernetes-dashboard
	356f2cafeda33       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   1dbe084bcabac       busybox                                      default
	3bf4f7155c432       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   5ce7a2f2982f5       kindnet-k4tzd                                kube-system
	c3ef0ffa9ede8       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   df39d7191ed05       coredns-7d764666f9-5tpb8                     kube-system
	8bc13140b3261       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   bc37a4c29e383       storage-provisioner                          kube-system
	8173be3b7c05b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           48 seconds ago      Running             kube-proxy                  0                   b988469fc2627       kube-proxy-gqz42                             kube-system
	6ee26bc7c96ed       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           50 seconds ago      Running             kube-controller-manager     0                   3b9af0adb9fc2       kube-controller-manager-no-preload-092439    kube-system
	47d48a88aaf2f       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           50 seconds ago      Running             kube-apiserver              0                   c11379ffe99d1       kube-apiserver-no-preload-092439             kube-system
	017238cc878d0       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           50 seconds ago      Running             kube-scheduler              0                   4e1f3673e53a8       kube-scheduler-no-preload-092439             kube-system
	9e0d3af710c80       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           50 seconds ago      Running             etcd                        0                   cd2393da550a6       etcd-no-preload-092439                       kube-system
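	(The listing above is in the format produced by the CRI command-line client. The exact invocation used by the log collector is not shown, but an equivalent view can usually be obtained on the node with:

	        sudo crictl ps -a

	which lists running and exited containers together with their pod sandbox IDs.)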
	
	
	==> coredns [c3ef0ffa9ede88313d5564b45adfa71559c2b439d058f7d23b302fa80b482168] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47905 - 4035 "HINFO IN 6110967146731182700.5495061107359195155. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018571511s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-092439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-092439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=no-preload-092439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_04_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:04:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-092439
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-092439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bf869612-dadc-4e0f-a9d5-5bc2846c3b03
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-7d764666f9-5tpb8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-no-preload-092439                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-k4tzd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-no-preload-092439              250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-no-preload-092439     200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-gqz42                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-no-preload-092439              100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-xdj7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6jlnk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  102s  node-controller  Node no-preload-092439 event: Registered Node no-preload-092439 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node no-preload-092439 event: Registered Node no-preload-092439 in Controller
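	(This node dump matches the shape of kubectl describe node output; assuming the collector used kubectl, the same information can be regenerated against this cluster with:

	        kubectl describe node no-preload-092439)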
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [9e0d3af710c80ebeda3c5932ad2b93927ce199ee5ed52ebdded84495b7ed024b] <==
	{"level":"warn","ts":"2025-12-10T23:05:12.359905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:12.367821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:12.374419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:12.418759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:18.188021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.273867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766750712316548 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" mod_revision:532 > success:<request_put:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" value_size:613 lease:6571766750712316445 >> failure:<request_range:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:18.188148Z","caller":"traceutil/trace.go:172","msg":"trace[1219693578] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"309.026663ms","start":"2025-12-10T23:05:17.879108Z","end":"2025-12-10T23:05:18.188135Z","steps":["trace[1219693578] 'process raft request'  (duration: 63.09679ms)","trace[1219693578] 'compare'  (duration: 245.18141ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:18.188202Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T23:05:17.879082Z","time spent":"309.094609ms","remote":"127.0.0.1:55882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":690,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" mod_revision:532 > success:<request_put:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" value_size:613 lease:6571766750712316445 >> failure:<request_range:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" > >"}
	{"level":"info","ts":"2025-12-10T23:05:18.400920Z","caller":"traceutil/trace.go:172","msg":"trace[2121665406] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"174.688177ms","start":"2025-12-10T23:05:18.226203Z","end":"2025-12-10T23:05:18.400891Z","steps":["trace[2121665406] 'process raft request'  (duration: 174.540101ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:18.603406Z","caller":"traceutil/trace.go:172","msg":"trace[1791242683] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"198.109934ms","start":"2025-12-10T23:05:18.405267Z","end":"2025-12-10T23:05:18.603377Z","steps":["trace[1791242683] 'process raft request'  (duration: 99.19311ms)","trace[1791242683] 'compare'  (duration: 98.787761ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:18.788694Z","caller":"traceutil/trace.go:172","msg":"trace[506927479] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"131.562814ms","start":"2025-12-10T23:05:18.657110Z","end":"2025-12-10T23:05:18.788673Z","steps":["trace[506927479] 'process raft request'  (duration: 131.399194ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:19.002804Z","caller":"traceutil/trace.go:172","msg":"trace[1236189220] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"116.523882ms","start":"2025-12-10T23:05:18.886259Z","end":"2025-12-10T23:05:19.002783Z","steps":["trace[1236189220] 'process raft request'  (duration: 116.356522ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:05:19.248573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.767969ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766750712316576 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" mod_revision:544 > success:<request_put:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" value_size:613 lease:6571766750712316445 >> failure:<request_range:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:19.248666Z","caller":"traceutil/trace.go:172","msg":"trace[146087888] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"239.510516ms","start":"2025-12-10T23:05:19.009126Z","end":"2025-12-10T23:05:19.248636Z","steps":["trace[146087888] 'process raft request'  (duration: 114.623404ms)","trace[146087888] 'compare'  (duration: 124.662196ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:19.530973Z","caller":"traceutil/trace.go:172","msg":"trace[1386041226] linearizableReadLoop","detail":"{readStateIndex:580; appliedIndex:580; }","duration":"200.452034ms","start":"2025-12-10T23:05:19.330501Z","end":"2025-12-10T23:05:19.530953Z","steps":["trace[1386041226] 'read index received'  (duration: 200.423225ms)","trace[1386041226] 'applied index is now lower than readState.Index'  (duration: 27.746µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:19.535356Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.836219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-5tpb8\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-12-10T23:05:19.535412Z","caller":"traceutil/trace.go:172","msg":"trace[1894048739] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-5tpb8; range_end:; response_count:1; response_revision:549; }","duration":"204.903121ms","start":"2025-12-10T23:05:19.330495Z","end":"2025-12-10T23:05:19.535398Z","steps":["trace[1894048739] 'agreement among raft nodes before linearized reading'  (duration: 200.510876ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:19.535465Z","caller":"traceutil/trace.go:172","msg":"trace[1047098104] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"245.294545ms","start":"2025-12-10T23:05:19.290153Z","end":"2025-12-10T23:05:19.535448Z","steps":["trace[1047098104] 'process raft request'  (duration: 240.834663ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:05:20.007104Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.737085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-5tpb8\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-12-10T23:05:20.007204Z","caller":"traceutil/trace.go:172","msg":"trace[385365805] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-5tpb8; range_end:; response_count:1; response_revision:558; }","duration":"176.848191ms","start":"2025-12-10T23:05:19.830340Z","end":"2025-12-10T23:05:20.007188Z","steps":["trace[385365805] 'agreement among raft nodes before linearized reading'  (duration: 45.180264ms)","trace[385365805] 'range keys from in-memory index tree'  (duration: 131.381865ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:20.007163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.513935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766750712316598 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-no-preload-092439.187ffd244e6918cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-no-preload-092439.187ffd244e6918cf\" value_size:763 lease:6571766750712316445 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:20.007432Z","caller":"traceutil/trace.go:172","msg":"trace[729272160] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"257.193702ms","start":"2025-12-10T23:05:19.750223Z","end":"2025-12-10T23:05:20.007417Z","steps":["trace[729272160] 'process raft request'  (duration: 125.357933ms)","trace[729272160] 'compare'  (duration: 131.192697ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:20.123881Z","caller":"traceutil/trace.go:172","msg":"trace[1615713273] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"111.107509ms","start":"2025-12-10T23:05:20.012755Z","end":"2025-12-10T23:05:20.123862Z","steps":["trace[1615713273] 'process raft request'  (duration: 94.219469ms)","trace[1615713273] 'compare'  (duration: 16.78465ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:20.123901Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.003507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-092439\" limit:1 ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2025-12-10T23:05:20.123959Z","caller":"traceutil/trace.go:172","msg":"trace[1559935575] range","detail":"{range_begin:/registry/minions/no-preload-092439; range_end:; response_count:1; response_revision:559; }","duration":"110.072045ms","start":"2025-12-10T23:05:20.013867Z","end":"2025-12-10T23:05:20.123940Z","steps":["trace[1559935575] 'agreement among raft nodes before linearized reading'  (duration: 93.062123ms)","trace[1559935575] 'range keys from in-memory index tree'  (duration: 16.843384ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:25.181877Z","caller":"traceutil/trace.go:172","msg":"trace[1690121058] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"207.625505ms","start":"2025-12-10T23:05:24.974227Z","end":"2025-12-10T23:05:25.181853Z","steps":["trace[1690121058] 'process raft request'  (duration: 145.655078ms)","trace[1690121058] 'compare'  (duration: 61.85062ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:06:01 up 48 min,  0 user,  load average: 4.45, 2.90, 1.86
	Linux no-preload-092439 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3bf4f7155c432603b41a6c12c2954315b96cd1a34c84a2e13f9a7a39e46ef3cd] <==
	I1210 23:05:13.641580       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:05:13.676710       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 23:05:13.677000       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:05:13.677058       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:05:13.677092       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:05:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:05:13.942217       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:05:13.976808       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:05:13.976850       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:05:14.076951       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:05:14.282509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:05:14.282603       1 metrics.go:72] Registering metrics
	I1210 23:05:14.282741       1 controller.go:711] "Syncing nftables rules"
	I1210 23:05:23.942929       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:23.942964       1 main.go:301] handling current node
	I1210 23:05:33.944773       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:33.944847       1 main.go:301] handling current node
	I1210 23:05:43.942640       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:43.942690       1 main.go:301] handling current node
	I1210 23:05:53.947130       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:53.947167       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47d48a88aaf2f336aaf052c8e06ba295472eb4a8dc9582731814742da2d715a2] <==
	I1210 23:05:12.927410       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:12.927466       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 23:05:12.927702       1 aggregator.go:187] initial CRD sync complete...
	I1210 23:05:12.927714       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:05:12.927720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:05:12.927726       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:05:12.935298       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:05:12.935335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:05:12.939315       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:12.939336       1 policy_source.go:248] refreshing policies
	E1210 23:05:12.940567       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 23:05:12.963820       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:12.965782       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:05:13.080201       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:05:13.402892       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:05:13.457245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:05:13.496295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:05:13.512885       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:05:13.616810       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.69.135"}
	I1210 23:05:13.638096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.27.37"}
	I1210 23:05:13.822985       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 23:05:16.516992       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:05:16.618507       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:05:16.667008       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6ee26bc7c96ed586eff3850cfe0f16397254e657370bfce96dc19153353ccd40] <==
	I1210 23:05:16.078866       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.079404       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.079434       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.079694       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.080369       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.080455       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:05:16.081233       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081333       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081350       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.082944       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081370       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081363       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.083171       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.085726       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.085748       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.087417       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.087444       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.087798       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.088011       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.088727       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.090270       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.090281       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.090287       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 23:05:16.090293       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 23:05:16.181539       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8173be3b7c05b175f4824b0b205d6e0ac2d5ea31cc37448e3cf92b819a82793d] <==
	I1210 23:05:13.486799       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:05:13.604819       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:05:13.705529       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:13.705571       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 23:05:13.705714       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:05:13.729694       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:05:13.729746       1 server_linux.go:136] "Using iptables Proxier"
	I1210 23:05:13.735731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:05:13.736136       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 23:05:13.736159       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:13.737535       1 config.go:200] "Starting service config controller"
	I1210 23:05:13.737950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:05:13.737660       1 config.go:309] "Starting node config controller"
	I1210 23:05:13.738072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:05:13.738121       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:05:13.738042       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:05:13.738182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:05:13.738029       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:05:13.738231       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:05:13.838983       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:05:13.839012       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:05:13.839012       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [017238cc878d0c921bda71833bcc5f0f7afe24f51f551351e5cf67faa077db1e] <==
	I1210 23:05:11.634228       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:05:12.841626       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:05:12.841674       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:05:12.841687       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:05:12.841696       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:05:12.899158       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 23:05:12.899206       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:12.903040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:05:12.903073       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:05:12.905081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:05:12.905135       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:05:13.004164       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 23:05:24 no-preload-092439 kubelet[710]: E1210 23:05:24.967370     710 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-092439" containerName="kube-scheduler"
	Dec 10 23:05:24 no-preload-092439 kubelet[710]: E1210 23:05:24.967446     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:28 no-preload-092439 kubelet[710]: E1210 23:05:28.176838     710 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-092439" containerName="etcd"
	Dec 10 23:05:28 no-preload-092439 kubelet[710]: E1210 23:05:28.980151     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk" containerName="kubernetes-dashboard"
	Dec 10 23:05:28 no-preload-092439 kubelet[710]: E1210 23:05:28.980268     710 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-092439" containerName="etcd"
	Dec 10 23:05:29 no-preload-092439 kubelet[710]: E1210 23:05:29.982787     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk" containerName="kubernetes-dashboard"
	Dec 10 23:05:30 no-preload-092439 kubelet[710]: E1210 23:05:30.213032     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:30 no-preload-092439 kubelet[710]: I1210 23:05:30.213071     710 scope.go:122] "RemoveContainer" containerID="cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a"
	Dec 10 23:05:30 no-preload-092439 kubelet[710]: E1210 23:05:30.213265     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: E1210 23:05:34.896927     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: I1210 23:05:34.896971     710 scope.go:122] "RemoveContainer" containerID="cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: I1210 23:05:34.998938     710 scope.go:122] "RemoveContainer" containerID="cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: E1210 23:05:34.999172     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: I1210 23:05:34.999204     710 scope.go:122] "RemoveContainer" containerID="4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: E1210 23:05:34.999377     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:35 no-preload-092439 kubelet[710]: I1210 23:05:35.011468     710 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk" podStartSLOduration=8.156583477 podStartE2EDuration="19.011448354s" podCreationTimestamp="2025-12-10 23:05:16 +0000 UTC" firstStartedPulling="2025-12-10 23:05:17.554018187 +0000 UTC m=+6.779532057" lastFinishedPulling="2025-12-10 23:05:28.408883068 +0000 UTC m=+17.634396934" observedRunningTime="2025-12-10 23:05:28.992124293 +0000 UTC m=+18.217638179" watchObservedRunningTime="2025-12-10 23:05:35.011448354 +0000 UTC m=+24.236962243"
	Dec 10 23:05:40 no-preload-092439 kubelet[710]: E1210 23:05:40.212785     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:40 no-preload-092439 kubelet[710]: I1210 23:05:40.212825     710 scope.go:122] "RemoveContainer" containerID="4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	Dec 10 23:05:40 no-preload-092439 kubelet[710]: E1210 23:05:40.213047     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:44 no-preload-092439 kubelet[710]: I1210 23:05:44.024086     710 scope.go:122] "RemoveContainer" containerID="8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714"
	Dec 10 23:05:45 no-preload-092439 kubelet[710]: E1210 23:05:45.444611     710 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5tpb8" containerName="coredns"
	Dec 10 23:05:59 no-preload-092439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:05:59 no-preload-092439 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:05:59 no-preload-092439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:05:59 no-preload-092439 systemd[1]: kubelet.service: Consumed 1.684s CPU time.
	
	
	==> kubernetes-dashboard [efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5] <==
	2025/12/10 23:05:28 Starting overwatch
	2025/12/10 23:05:28 Using namespace: kubernetes-dashboard
	2025/12/10 23:05:28 Using in-cluster config to connect to apiserver
	2025/12/10 23:05:28 Using secret token for csrf signing
	2025/12/10 23:05:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:05:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:05:28 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/10 23:05:28 Generating JWE encryption key
	2025/12/10 23:05:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:05:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:05:28 Initializing JWE encryption key from synchronized object
	2025/12/10 23:05:28 Creating in-cluster Sidecar client
	2025/12/10 23:05:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:05:28 Serving insecurely on HTTP port: 9090
	2025/12/10 23:05:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714] <==
	I1210 23:05:13.373604       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:05:43.382041       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00] <==
	I1210 23:05:44.074754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:05:44.081922       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:05:44.081963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:05:44.084049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:47.538995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:51.799518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:55.398271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:58.452108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.475522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.480394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:06:01.480580       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:06:01.480668       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e18e7f7-4d20-4032-b0ec-4af7afe85afe", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-092439_0efcb2dd-8afb-49c6-8cbd-3115b15bde19 became leader
	I1210 23:06:01.480701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-092439_0efcb2dd-8afb-49c6-8cbd-3115b15bde19!
	W1210 23:06:01.485560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.491744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:06:01.581913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-092439_0efcb2dd-8afb-49c6-8cbd-3115b15bde19!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-092439 -n no-preload-092439
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-092439 -n no-preload-092439: exit status 2 (375.606727ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-092439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-092439
helpers_test.go:244: (dbg) docker inspect no-preload-092439:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213",
	        "Created": "2025-12-10T23:03:49.807359238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:05:04.10216554Z",
	            "FinishedAt": "2025-12-10T23:05:03.161555362Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/hostname",
	        "HostsPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/hosts",
	        "LogPath": "/var/lib/docker/containers/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213/08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213-json.log",
	        "Name": "/no-preload-092439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-092439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-092439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08ed46fd1dff96bb2e0e372a92b4215d02ee25bc6dc4bf774ed4f8af1a36b213",
	                "LowerDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f162432b4338212263af09f7bfb528fdb3a4747a336c6adc736423ecc0d8eb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-092439",
	                "Source": "/var/lib/docker/volumes/no-preload-092439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-092439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-092439",
	                "name.minikube.sigs.k8s.io": "no-preload-092439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b50e25863e078713840c184013bdc1b5c9b6fc28f353f6b29581045492112b5f",
	            "SandboxKey": "/var/run/docker/netns/b50e25863e07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-092439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9adf045f08f3157cc4b3a22d4d1229edfd6c1e8d22978b4ef7f6f7a0d83df92c",
	                    "EndpointID": "63a2d812f7cd710c1e1dbda450fea4335735f6489607e8111d6fb3806be2545f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:95:0b:1e:01:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-092439",
	                        "08ed46fd1dff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439: exit status 2 (350.983005ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-092439 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-092439 logs -n 25: (1.216752609s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p NoKubernetes-508535 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                              │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │                     │
	│ delete  │ -p NoKubernetes-508535                                                                                                                                                                                                                               │ NoKubernetes-508535          │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:03 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p old-k8s-version-280530 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p no-preload-092439 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                         │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                            │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                                      │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                           │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:06:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:06:01.642216  288470 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:01.642541  288470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:01.642553  288470 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:01.642558  288470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:01.642793  288470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:01.643346  288470 out.go:368] Setting JSON to false
	I1210 23:06:01.644756  288470 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1765405058,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:06:01.644821  288470 start.go:143] virtualization: kvm guest
	I1210 23:06:01.646925  288470 out.go:179] * [newest-cni-852445] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:06:01.648063  288470 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:06:01.648103  288470 notify.go:221] Checking for updates...
	I1210 23:06:01.650395  288470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:06:01.651669  288470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:01.653457  288470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:06:01.654993  288470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:06:01.656390  288470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:06:01.658190  288470 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:01.658422  288470 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:01.658613  288470 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:01.658745  288470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:06:01.685746  288470 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:06:01.685917  288470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:01.753389  288470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 23:06:01.741287277 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:01.753543  288470 docker.go:319] overlay module found
	I1210 23:06:01.755674  288470 out.go:179] * Using the docker driver based on user configuration
	I1210 23:06:01.757237  288470 start.go:309] selected driver: docker
	I1210 23:06:01.757256  288470 start.go:927] validating driver "docker" against <nil>
	I1210 23:06:01.757272  288470 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:06:01.757963  288470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:01.831341  288470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 23:06:01.821067975 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:01.831531  288470 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	W1210 23:06:01.831579  288470 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 23:06:01.831860  288470 start_flags.go:1150] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 23:06:01.834212  288470 out.go:179] * Using Docker driver with root privileges
	I1210 23:06:01.835361  288470 cni.go:84] Creating CNI manager for ""
	I1210 23:06:01.835435  288470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:01.835476  288470 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:06:01.835579  288470 start.go:353] cluster config:
	{Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:01.837185  288470 out.go:179] * Starting "newest-cni-852445" primary control-plane node in "newest-cni-852445" cluster
	I1210 23:06:01.838397  288470 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:06:01.839748  288470 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:06:01.840857  288470 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:06:01.840894  288470 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 23:06:01.840913  288470 cache.go:65] Caching tarball of preloaded images
	I1210 23:06:01.840984  288470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:06:01.841040  288470 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:06:01.841054  288470 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 23:06:01.841152  288470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json ...
	I1210 23:06:01.841173  288470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json: {Name:mk43e6d471aefff014120e3637224bdb2b726b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:01.863520  288470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:06:01.863541  288470 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:06:01.863566  288470 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:06:01.863601  288470 start.go:360] acquireMachinesLock for newest-cni-852445: {Name:mk113c2684b04f857c1d54dc6179d89c7f0645fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:06:01.864050  288470 start.go:364] duration metric: took 423.463µs to acquireMachinesLock for "newest-cni-852445"
	I1210 23:06:01.864096  288470 start.go:93] Provisioning new machine with config: &{Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:01.864190  288470 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 10 23:05:28 no-preload-092439 crio[565]: time="2025-12-10T23:05:28.449131035Z" level=info msg="Created container efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5: kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk/kubernetes-dashboard" id=a95d98d6-1a14-4757-8074-88dcccee03e7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:28 no-preload-092439 crio[565]: time="2025-12-10T23:05:28.449922454Z" level=info msg="Starting container: efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5" id=dc7b4008-525d-44ab-ab09-dce83fb8cb7b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:28 no-preload-092439 crio[565]: time="2025-12-10T23:05:28.452063018Z" level=info msg="Started container" PID=1719 containerID=efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5 description=kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk/kubernetes-dashboard id=dc7b4008-525d-44ab-ab09-dce83fb8cb7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e4f96b27f0688b3b03dbd68556b8ed16e974d13f00c8856a933c65962d1524a
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.897523122Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=856d9104-2ce0-49fb-962e-a22a60e24933 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.900699541Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=386f50f1-aca6-4b02-98ff-8ed3db9510b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.903439152Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper" id=d02b9591-cd04-4498-97b2-d036d23b3c8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.903562162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.910531409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.911039772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.938844442Z" level=info msg="Created container 4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper" id=d02b9591-cd04-4498-97b2-d036d23b3c8e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.939483243Z" level=info msg="Starting container: 4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4" id=336e27ca-f1a2-4f56-b4b2-b3465d2b5b79 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:34 no-preload-092439 crio[565]: time="2025-12-10T23:05:34.942216884Z" level=info msg="Started container" PID=1737 containerID=4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper id=336e27ca-f1a2-4f56-b4b2-b3465d2b5b79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09da54d6dde5201f3c2df23881290f76e99895e199c6f8c496173ad767b18ac3
	Dec 10 23:05:35 no-preload-092439 crio[565]: time="2025-12-10T23:05:35.000233777Z" level=info msg="Removing container: cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a" id=9baaf368-fd1d-4dde-8e25-3262ba9bcdef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:05:35 no-preload-092439 crio[565]: time="2025-12-10T23:05:35.010439003Z" level=info msg="Removed container cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x/dashboard-metrics-scraper" id=9baaf368-fd1d-4dde-8e25-3262ba9bcdef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.024490731Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=49218e37-1e66-441e-8e52-8476627a2a78 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.025540016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bcd00f96-14ce-44fe-bf76-caea55841ea8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.026631462Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=be335b1b-d4b8-4502-9a4e-ec381e18f4eb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.02678351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.031196864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.031374379Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ca728880da80faf1c8b0614d01ff15bed541abed6852b0736fb2c2a38694e85f/merged/etc/passwd: no such file or directory"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.031410388Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ca728880da80faf1c8b0614d01ff15bed541abed6852b0736fb2c2a38694e85f/merged/etc/group: no such file or directory"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.03184585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.059405341Z" level=info msg="Created container db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00: kube-system/storage-provisioner/storage-provisioner" id=be335b1b-d4b8-4502-9a4e-ec381e18f4eb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.060059602Z" level=info msg="Starting container: db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00" id=79fe71eb-f2fc-4ee6-90b4-3a9636f2df6e name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:44 no-preload-092439 crio[565]: time="2025-12-10T23:05:44.062189317Z" level=info msg="Started container" PID=1753 containerID=db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00 description=kube-system/storage-provisioner/storage-provisioner id=79fe71eb-f2fc-4ee6-90b4-3a9636f2df6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc37a4c29e3836b2719e63650de5284f6f3af27aded1a2b393d434a51f938c18
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	db89ec35a1bd7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   bc37a4c29e383       storage-provisioner                          kube-system
	4da3c51cf46fa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   09da54d6dde52       dashboard-metrics-scraper-867fb5f87b-xdj7x   kubernetes-dashboard
	efaff6f1c0d44       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   2e4f96b27f068       kubernetes-dashboard-b84665fb8-6jlnk         kubernetes-dashboard
	356f2cafeda33       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   1dbe084bcabac       busybox                                      default
	3bf4f7155c432       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   5ce7a2f2982f5       kindnet-k4tzd                                kube-system
	c3ef0ffa9ede8       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   df39d7191ed05       coredns-7d764666f9-5tpb8                     kube-system
	8bc13140b3261       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   bc37a4c29e383       storage-provisioner                          kube-system
	8173be3b7c05b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           50 seconds ago      Running             kube-proxy                  0                   b988469fc2627       kube-proxy-gqz42                             kube-system
	6ee26bc7c96ed       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           52 seconds ago      Running             kube-controller-manager     0                   3b9af0adb9fc2       kube-controller-manager-no-preload-092439    kube-system
	47d48a88aaf2f       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           52 seconds ago      Running             kube-apiserver              0                   c11379ffe99d1       kube-apiserver-no-preload-092439             kube-system
	017238cc878d0       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           52 seconds ago      Running             kube-scheduler              0                   4e1f3673e53a8       kube-scheduler-no-preload-092439             kube-system
	9e0d3af710c80       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   cd2393da550a6       etcd-no-preload-092439                       kube-system
	
	
	==> coredns [c3ef0ffa9ede88313d5564b45adfa71559c2b439d058f7d23b302fa80b482168] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47905 - 4035 "HINFO IN 6110967146731182700.5495061107359195155. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018571511s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-092439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-092439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=no-preload-092439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_04_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:04:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-092439
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:05:43 +0000   Wed, 10 Dec 2025 23:04:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-092439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                bf869612-dadc-4e0f-a9d5-5bc2846c3b03
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-5tpb8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-092439                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-k4tzd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-no-preload-092439              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-092439     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-gqz42                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-no-preload-092439              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-xdj7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6jlnk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  104s  node-controller  Node no-preload-092439 event: Registered Node no-preload-092439 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node no-preload-092439 event: Registered Node no-preload-092439 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [9e0d3af710c80ebeda3c5932ad2b93927ce199ee5ed52ebdded84495b7ed024b] <==
	{"level":"warn","ts":"2025-12-10T23:05:12.359905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:12.367821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:12.374419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:12.418759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:18.188021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.273867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766750712316548 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" mod_revision:532 > success:<request_put:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" value_size:613 lease:6571766750712316445 >> failure:<request_range:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:18.188148Z","caller":"traceutil/trace.go:172","msg":"trace[1219693578] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"309.026663ms","start":"2025-12-10T23:05:17.879108Z","end":"2025-12-10T23:05:18.188135Z","steps":["trace[1219693578] 'process raft request'  (duration: 63.09679ms)","trace[1219693578] 'compare'  (duration: 245.18141ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:18.188202Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T23:05:17.879082Z","time spent":"309.094609ms","remote":"127.0.0.1:55882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":690,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" mod_revision:532 > success:<request_put:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" value_size:613 lease:6571766750712316445 >> failure:<request_range:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" > >"}
	{"level":"info","ts":"2025-12-10T23:05:18.400920Z","caller":"traceutil/trace.go:172","msg":"trace[2121665406] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"174.688177ms","start":"2025-12-10T23:05:18.226203Z","end":"2025-12-10T23:05:18.400891Z","steps":["trace[2121665406] 'process raft request'  (duration: 174.540101ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:18.603406Z","caller":"traceutil/trace.go:172","msg":"trace[1791242683] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"198.109934ms","start":"2025-12-10T23:05:18.405267Z","end":"2025-12-10T23:05:18.603377Z","steps":["trace[1791242683] 'process raft request'  (duration: 99.19311ms)","trace[1791242683] 'compare'  (duration: 98.787761ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:18.788694Z","caller":"traceutil/trace.go:172","msg":"trace[506927479] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"131.562814ms","start":"2025-12-10T23:05:18.657110Z","end":"2025-12-10T23:05:18.788673Z","steps":["trace[506927479] 'process raft request'  (duration: 131.399194ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:19.002804Z","caller":"traceutil/trace.go:172","msg":"trace[1236189220] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"116.523882ms","start":"2025-12-10T23:05:18.886259Z","end":"2025-12-10T23:05:19.002783Z","steps":["trace[1236189220] 'process raft request'  (duration: 116.356522ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:05:19.248573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.767969ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766750712316576 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" mod_revision:544 > success:<request_put:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" value_size:613 lease:6571766750712316445 >> failure:<request_range:<key:\"/registry/events/default/no-preload-092439.187ffd2433cf43fb\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:19.248666Z","caller":"traceutil/trace.go:172","msg":"trace[146087888] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"239.510516ms","start":"2025-12-10T23:05:19.009126Z","end":"2025-12-10T23:05:19.248636Z","steps":["trace[146087888] 'process raft request'  (duration: 114.623404ms)","trace[146087888] 'compare'  (duration: 124.662196ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:19.530973Z","caller":"traceutil/trace.go:172","msg":"trace[1386041226] linearizableReadLoop","detail":"{readStateIndex:580; appliedIndex:580; }","duration":"200.452034ms","start":"2025-12-10T23:05:19.330501Z","end":"2025-12-10T23:05:19.530953Z","steps":["trace[1386041226] 'read index received'  (duration: 200.423225ms)","trace[1386041226] 'applied index is now lower than readState.Index'  (duration: 27.746µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:19.535356Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.836219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-5tpb8\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-12-10T23:05:19.535412Z","caller":"traceutil/trace.go:172","msg":"trace[1894048739] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-5tpb8; range_end:; response_count:1; response_revision:549; }","duration":"204.903121ms","start":"2025-12-10T23:05:19.330495Z","end":"2025-12-10T23:05:19.535398Z","steps":["trace[1894048739] 'agreement among raft nodes before linearized reading'  (duration: 200.510876ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T23:05:19.535465Z","caller":"traceutil/trace.go:172","msg":"trace[1047098104] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"245.294545ms","start":"2025-12-10T23:05:19.290153Z","end":"2025-12-10T23:05:19.535448Z","steps":["trace[1047098104] 'process raft request'  (duration: 240.834663ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T23:05:20.007104Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.737085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-5tpb8\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-12-10T23:05:20.007204Z","caller":"traceutil/trace.go:172","msg":"trace[385365805] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-5tpb8; range_end:; response_count:1; response_revision:558; }","duration":"176.848191ms","start":"2025-12-10T23:05:19.830340Z","end":"2025-12-10T23:05:20.007188Z","steps":["trace[385365805] 'agreement among raft nodes before linearized reading'  (duration: 45.180264ms)","trace[385365805] 'range keys from in-memory index tree'  (duration: 131.381865ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:20.007163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.513935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766750712316598 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-no-preload-092439.187ffd244e6918cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-no-preload-092439.187ffd244e6918cf\" value_size:763 lease:6571766750712316445 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T23:05:20.007432Z","caller":"traceutil/trace.go:172","msg":"trace[729272160] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"257.193702ms","start":"2025-12-10T23:05:19.750223Z","end":"2025-12-10T23:05:20.007417Z","steps":["trace[729272160] 'process raft request'  (duration: 125.357933ms)","trace[729272160] 'compare'  (duration: 131.192697ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:20.123881Z","caller":"traceutil/trace.go:172","msg":"trace[1615713273] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"111.107509ms","start":"2025-12-10T23:05:20.012755Z","end":"2025-12-10T23:05:20.123862Z","steps":["trace[1615713273] 'process raft request'  (duration: 94.219469ms)","trace[1615713273] 'compare'  (duration: 16.78465ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:05:20.123901Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.003507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-092439\" limit:1 ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2025-12-10T23:05:20.123959Z","caller":"traceutil/trace.go:172","msg":"trace[1559935575] range","detail":"{range_begin:/registry/minions/no-preload-092439; range_end:; response_count:1; response_revision:559; }","duration":"110.072045ms","start":"2025-12-10T23:05:20.013867Z","end":"2025-12-10T23:05:20.123940Z","steps":["trace[1559935575] 'agreement among raft nodes before linearized reading'  (duration: 93.062123ms)","trace[1559935575] 'range keys from in-memory index tree'  (duration: 16.843384ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T23:05:25.181877Z","caller":"traceutil/trace.go:172","msg":"trace[1690121058] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"207.625505ms","start":"2025-12-10T23:05:24.974227Z","end":"2025-12-10T23:05:25.181853Z","steps":["trace[1690121058] 'process raft request'  (duration: 145.655078ms)","trace[1690121058] 'compare'  (duration: 61.85062ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:06:04 up 48 min,  0 user,  load average: 4.45, 2.90, 1.86
	Linux no-preload-092439 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3bf4f7155c432603b41a6c12c2954315b96cd1a34c84a2e13f9a7a39e46ef3cd] <==
	I1210 23:05:13.641580       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:05:13.676710       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 23:05:13.677000       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:05:13.677058       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:05:13.677092       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:05:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:05:13.942217       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:05:13.976808       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:05:13.976850       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:05:14.076951       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:05:14.282509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:05:14.282603       1 metrics.go:72] Registering metrics
	I1210 23:05:14.282741       1 controller.go:711] "Syncing nftables rules"
	I1210 23:05:23.942929       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:23.942964       1 main.go:301] handling current node
	I1210 23:05:33.944773       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:33.944847       1 main.go:301] handling current node
	I1210 23:05:43.942640       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:43.942690       1 main.go:301] handling current node
	I1210 23:05:53.947130       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:05:53.947167       1 main.go:301] handling current node
	I1210 23:06:03.950752       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 23:06:03.950790       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47d48a88aaf2f336aaf052c8e06ba295472eb4a8dc9582731814742da2d715a2] <==
	I1210 23:05:12.927410       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:12.927466       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 23:05:12.927702       1 aggregator.go:187] initial CRD sync complete...
	I1210 23:05:12.927714       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:05:12.927720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:05:12.927726       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:05:12.935298       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:05:12.935335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:05:12.939315       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:12.939336       1 policy_source.go:248] refreshing policies
	E1210 23:05:12.940567       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 23:05:12.963820       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:12.965782       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:05:13.080201       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:05:13.402892       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:05:13.457245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:05:13.496295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:05:13.512885       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:05:13.616810       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.69.135"}
	I1210 23:05:13.638096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.27.37"}
	I1210 23:05:13.822985       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 23:05:16.516992       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:05:16.618507       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:05:16.667008       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6ee26bc7c96ed586eff3850cfe0f16397254e657370bfce96dc19153353ccd40] <==
	I1210 23:05:16.078866       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.079404       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.079434       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.079694       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.080369       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.080455       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:05:16.081233       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081333       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081350       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.082944       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081370       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.081363       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.083171       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.085726       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.085748       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.087417       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.087444       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.087798       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.088011       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.088727       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.090270       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.090281       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:16.090287       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 23:05:16.090293       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 23:05:16.181539       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8173be3b7c05b175f4824b0b205d6e0ac2d5ea31cc37448e3cf92b819a82793d] <==
	I1210 23:05:13.486799       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:05:13.604819       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:05:13.705529       1 shared_informer.go:377] "Caches are synced"
	I1210 23:05:13.705571       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 23:05:13.705714       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:05:13.729694       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:05:13.729746       1 server_linux.go:136] "Using iptables Proxier"
	I1210 23:05:13.735731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:05:13.736136       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 23:05:13.736159       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:13.737535       1 config.go:200] "Starting service config controller"
	I1210 23:05:13.737950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:05:13.737660       1 config.go:309] "Starting node config controller"
	I1210 23:05:13.738072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:05:13.738121       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:05:13.738042       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:05:13.738182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:05:13.738029       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:05:13.738231       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:05:13.838983       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:05:13.839012       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:05:13.839012       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [017238cc878d0c921bda71833bcc5f0f7afe24f51f551351e5cf67faa077db1e] <==
	I1210 23:05:11.634228       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:05:12.841626       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:05:12.841674       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:05:12.841687       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:05:12.841696       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:05:12.899158       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 23:05:12.899206       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:12.903040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:05:12.903073       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:05:12.905081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:05:12.905135       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:05:13.004164       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 23:05:24 no-preload-092439 kubelet[710]: E1210 23:05:24.967370     710 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-092439" containerName="kube-scheduler"
	Dec 10 23:05:24 no-preload-092439 kubelet[710]: E1210 23:05:24.967446     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:28 no-preload-092439 kubelet[710]: E1210 23:05:28.176838     710 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-092439" containerName="etcd"
	Dec 10 23:05:28 no-preload-092439 kubelet[710]: E1210 23:05:28.980151     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk" containerName="kubernetes-dashboard"
	Dec 10 23:05:28 no-preload-092439 kubelet[710]: E1210 23:05:28.980268     710 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-092439" containerName="etcd"
	Dec 10 23:05:29 no-preload-092439 kubelet[710]: E1210 23:05:29.982787     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk" containerName="kubernetes-dashboard"
	Dec 10 23:05:30 no-preload-092439 kubelet[710]: E1210 23:05:30.213032     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:30 no-preload-092439 kubelet[710]: I1210 23:05:30.213071     710 scope.go:122] "RemoveContainer" containerID="cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a"
	Dec 10 23:05:30 no-preload-092439 kubelet[710]: E1210 23:05:30.213265     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: E1210 23:05:34.896927     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: I1210 23:05:34.896971     710 scope.go:122] "RemoveContainer" containerID="cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: I1210 23:05:34.998938     710 scope.go:122] "RemoveContainer" containerID="cbb1a304260f7af1cb6c351679164c061a3e7821d68535ba2e811bb1872b898a"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: E1210 23:05:34.999172     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: I1210 23:05:34.999204     710 scope.go:122] "RemoveContainer" containerID="4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	Dec 10 23:05:34 no-preload-092439 kubelet[710]: E1210 23:05:34.999377     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:35 no-preload-092439 kubelet[710]: I1210 23:05:35.011468     710 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6jlnk" podStartSLOduration=8.156583477 podStartE2EDuration="19.011448354s" podCreationTimestamp="2025-12-10 23:05:16 +0000 UTC" firstStartedPulling="2025-12-10 23:05:17.554018187 +0000 UTC m=+6.779532057" lastFinishedPulling="2025-12-10 23:05:28.408883068 +0000 UTC m=+17.634396934" observedRunningTime="2025-12-10 23:05:28.992124293 +0000 UTC m=+18.217638179" watchObservedRunningTime="2025-12-10 23:05:35.011448354 +0000 UTC m=+24.236962243"
	Dec 10 23:05:40 no-preload-092439 kubelet[710]: E1210 23:05:40.212785     710 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" containerName="dashboard-metrics-scraper"
	Dec 10 23:05:40 no-preload-092439 kubelet[710]: I1210 23:05:40.212825     710 scope.go:122] "RemoveContainer" containerID="4da3c51cf46fae6bdddab470079f5bf5d28d274192054bc31b5ccc1e95d8aea4"
	Dec 10 23:05:40 no-preload-092439 kubelet[710]: E1210 23:05:40.213047     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xdj7x_kubernetes-dashboard(912bc880-63fd-46fe-a46f-0d75bc93b41d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xdj7x" podUID="912bc880-63fd-46fe-a46f-0d75bc93b41d"
	Dec 10 23:05:44 no-preload-092439 kubelet[710]: I1210 23:05:44.024086     710 scope.go:122] "RemoveContainer" containerID="8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714"
	Dec 10 23:05:45 no-preload-092439 kubelet[710]: E1210 23:05:45.444611     710 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5tpb8" containerName="coredns"
	Dec 10 23:05:59 no-preload-092439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:05:59 no-preload-092439 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:05:59 no-preload-092439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:05:59 no-preload-092439 systemd[1]: kubelet.service: Consumed 1.684s CPU time.
	
	
	==> kubernetes-dashboard [efaff6f1c0d447f3a95b88462b70ba567fd5e84495d41c94a6391c6242b8dad5] <==
	2025/12/10 23:05:28 Starting overwatch
	2025/12/10 23:05:28 Using namespace: kubernetes-dashboard
	2025/12/10 23:05:28 Using in-cluster config to connect to apiserver
	2025/12/10 23:05:28 Using secret token for csrf signing
	2025/12/10 23:05:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:05:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:05:28 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/10 23:05:28 Generating JWE encryption key
	2025/12/10 23:05:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:05:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:05:28 Initializing JWE encryption key from synchronized object
	2025/12/10 23:05:28 Creating in-cluster Sidecar client
	2025/12/10 23:05:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:05:28 Serving insecurely on HTTP port: 9090
	2025/12/10 23:05:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8bc13140b32614befb9d3296f1726c9cac7a33943c7c9a3af2c2027b2bfee714] <==
	I1210 23:05:13.373604       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:05:43.382041       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db89ec35a1bd7345923a82c7e64becea75c7acbdd0609d5f65b9f58344c1fd00] <==
	I1210 23:05:44.074754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:05:44.081922       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:05:44.081963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:05:44.084049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:47.538995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:51.799518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:55.398271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:58.452108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.475522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.480394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:06:01.480580       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:06:01.480668       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e18e7f7-4d20-4032-b0ec-4af7afe85afe", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-092439_0efcb2dd-8afb-49c6-8cbd-3115b15bde19 became leader
	I1210 23:06:01.480701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-092439_0efcb2dd-8afb-49c6-8cbd-3115b15bde19!
	W1210 23:06:01.485560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.491744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:06:01.581913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-092439_0efcb2dd-8afb-49c6-8cbd-3115b15bde19!
	W1210 23:06:03.494997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:03.499732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
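
Note on the logs above: the fatal in the first storage-provisioner instance (main.go:39, "error getting server version") is the provisioner's startup reachability check against the kubernetes service VIP at 10.96.0.1:443. A minimal sketch of that kind of check follows, assuming a client-go in-cluster configuration; it is an illustration of the failure mode, not the provisioner's actual source.

	// Illustrative only: a startup check of the kind that produced the
	// "error getting server version" fatal above. Assumes client-go's
	// in-cluster config, which points at the service VIP (10.96.0.1:443).
	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // uses the mounted service-account token
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		// Issues GET https://10.96.0.1:443/version; while the service network
		// is still recovering after a node restart, this fails with an i/o timeout.
		if _, err := cs.Discovery().ServerVersion(); err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
	}

Consistent with that reading, the replacement instance (db89ec...) starts a few seconds later, reaches the apiserver, and acquires the k8s.io-minikube-hostpath leader lease.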
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-092439 -n no-preload-092439
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-092439 -n no-preload-092439: exit status 2 (364.590406ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-092439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.16456ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
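
Annotation: the exit status 11 above is minikube refusing to modify addons because its pre-flight "is anything paused?" check itself failed: "sudo runc list -f json" exits 1 since /run/runc does not exist on this CRI-O node, and that error is surfaced as MK_ADDON_ENABLE_PAUSED. A rough sketch of such a check, assuming only the command and JSON shape shown in the error message (illustrative, not minikube's actual code):

	// Illustrative paused-state check: list runc containers as JSON and
	// report any in the "paused" state. On this node the plain command
	// fails outright ("open /run/runc: no such file or directory"),
	// so the check errors before it can decide anything.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var cs []runcContainer
		if len(out) > 0 {
			if err := json.Unmarshal(out, &cs); err != nil {
				return nil, err
			}
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}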
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-468067 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-468067 describe deploy/metrics-server -n kube-system: exit status 1 (74.556876ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-468067 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
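
For reference, the failed assertion above amounts to a substring check against kubectl's describe output for the metrics-server deployment, which never got created because the addon enable command bailed out earlier. A rough approximation of what start_stop_delete_test.go:219 verifies (not the test's exact code; context and image string taken from this run):

	// Illustrative re-run of the image check: describe the deployment and
	// require the overridden registry/image reference to appear in the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		out, err := exec.Command("kubectl", "--context", "embed-certs-468067",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), want) {
			fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
		}
	}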
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-468067
helpers_test.go:244: (dbg) docker inspect embed-certs-468067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8",
	        "Created": "2025-12-10T23:05:20.332136032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:05:20.952451608Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8-json.log",
	        "Name": "/embed-certs-468067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-468067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-468067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8",
	                "LowerDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-468067",
	                "Source": "/var/lib/docker/volumes/embed-certs-468067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-468067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-468067",
	                "name.minikube.sigs.k8s.io": "embed-certs-468067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5534456bb44709f14d3dfbceceebf655cd6228e2d4755ec62969648433cb4bc3",
	            "SandboxKey": "/var/run/docker/netns/5534456bb447",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-468067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62dd6bab6a632f3b3d47ad53284a920285184de444b92fe6a92c9c747bea6de0",
	                    "EndpointID": "0b9a26da1f5d3233e4cb5c0f5300bc677b8a65c27ded781bd7daeaa797979d71",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ea:7c:c1:41:14:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-468067",
	                        "4b27d4853e79"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
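
When reproducing this post-mortem by hand, the full inspect dump above can be narrowed to the fields that matter for connectivity, for example the published host ports. An illustrative helper using a Go template (container name taken from this run):

	// Illustrative only: print just the published ports of the kic container,
	// e.g. {"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33079"}], ...}.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", "{{json .NetworkSettings.Ports}}", "embed-certs-468067").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println(string(out))
	}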
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-468067 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-468067 logs -n 25: (1.174954011s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:03 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-280530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p old-k8s-version-280530 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-092439 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │                     │
	│ stop    │ -p no-preload-092439 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                         │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                            │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                                      │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                           │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:06:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:06:01.642216  288470 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:01.642541  288470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:01.642553  288470 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:01.642558  288470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:01.642793  288470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:01.643346  288470 out.go:368] Setting JSON to false
	I1210 23:06:01.644756  288470 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1765405058,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:06:01.644821  288470 start.go:143] virtualization: kvm guest
	I1210 23:06:01.646925  288470 out.go:179] * [newest-cni-852445] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:06:01.648063  288470 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:06:01.648103  288470 notify.go:221] Checking for updates...
	I1210 23:06:01.650395  288470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:06:01.651669  288470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:01.653457  288470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:06:01.654993  288470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:06:01.656390  288470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:06:01.658190  288470 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:01.658422  288470 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:01.658613  288470 config.go:182] Loaded profile config "no-preload-092439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:01.658745  288470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:06:01.685746  288470 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:06:01.685917  288470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:01.753389  288470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 23:06:01.741287277 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:01.753543  288470 docker.go:319] overlay module found
	I1210 23:06:01.755674  288470 out.go:179] * Using the docker driver based on user configuration
	I1210 23:06:01.757237  288470 start.go:309] selected driver: docker
	I1210 23:06:01.757256  288470 start.go:927] validating driver "docker" against <nil>
	I1210 23:06:01.757272  288470 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:06:01.757963  288470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:01.831341  288470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 23:06:01.821067975 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:01.831531  288470 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	W1210 23:06:01.831579  288470 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 23:06:01.831860  288470 start_flags.go:1150] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 23:06:01.834212  288470 out.go:179] * Using Docker driver with root privileges
	I1210 23:06:01.835361  288470 cni.go:84] Creating CNI manager for ""
	I1210 23:06:01.835435  288470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:01.835476  288470 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:06:01.835579  288470 start.go:353] cluster config:
	{Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:01.837185  288470 out.go:179] * Starting "newest-cni-852445" primary control-plane node in "newest-cni-852445" cluster
	I1210 23:06:01.838397  288470 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:06:01.839748  288470 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:06:01.840857  288470 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:06:01.840894  288470 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 23:06:01.840913  288470 cache.go:65] Caching tarball of preloaded images
	I1210 23:06:01.840984  288470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:06:01.841040  288470 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:06:01.841054  288470 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 23:06:01.841152  288470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json ...
	I1210 23:06:01.841173  288470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json: {Name:mk43e6d471aefff014120e3637224bdb2b726b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:01.863520  288470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:06:01.863541  288470 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:06:01.863566  288470 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:06:01.863601  288470 start.go:360] acquireMachinesLock for newest-cni-852445: {Name:mk113c2684b04f857c1d54dc6179d89c7f0645fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:06:01.864050  288470 start.go:364] duration metric: took 423.463µs to acquireMachinesLock for "newest-cni-852445"
	I1210 23:06:01.864096  288470 start.go:93] Provisioning new machine with config: &{Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:01.864190  288470 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:06:01.964330  279952 node_ready.go:49] node "default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:01.964366  279952 node_ready.go:38] duration metric: took 11.004624823s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:06:01.964398  279952 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:01.964453  279952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:01.984892  279952 api_server.go:72] duration metric: took 11.297122361s to wait for apiserver process to appear ...
	I1210 23:06:01.984937  279952 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:01.984960  279952 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 23:06:01.991758  279952 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1210 23:06:01.992914  279952 api_server.go:141] control plane version: v1.34.2
	I1210 23:06:01.992993  279952 api_server.go:131] duration metric: took 7.997274ms to wait for apiserver health ...
	I1210 23:06:01.993008  279952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:01.996716  279952 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:01.996744  279952 system_pods.go:61] "coredns-66bc5c9577-s8zsm" [24faae58-d6c6-42ad-93d3-3d160895982e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:01.996750  279952 system_pods.go:61] "etcd-default-k8s-diff-port-443884" [306255e6-2652-4217-ade8-a96f119869f2] Running
	I1210 23:06:01.996756  279952 system_pods.go:61] "kindnet-wtcv9" [d5d31b10-60af-4ff4-bb38-44edc65ef3d3] Running
	I1210 23:06:01.996760  279952 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-443884" [4fb15273-fe29-41cc-9e81-99448e6f455a] Running
	I1210 23:06:01.996763  279952 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-443884" [df38c0f6-f94b-404f-b33c-c6c522b7a29e] Running
	I1210 23:06:01.996767  279952 system_pods.go:61] "kube-proxy-lwnhd" [fcf815a4-e235-459b-b10a-31761cb8ad21] Running
	I1210 23:06:01.996774  279952 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-443884" [7bb103ce-e5ca-49af-948f-735d76edbdd0] Running
	I1210 23:06:01.996782  279952 system_pods.go:61] "storage-provisioner" [81e22dd7-170e-4dfb-abf8-96dde77438ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:01.996788  279952 system_pods.go:74] duration metric: took 3.773556ms to wait for pod list to return data ...
	I1210 23:06:01.996804  279952 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:01.999064  279952 default_sa.go:45] found service account: "default"
	I1210 23:06:01.999084  279952 default_sa.go:55] duration metric: took 2.271171ms for default service account to be created ...
	I1210 23:06:01.999093  279952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:06:02.001985  279952 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:02.002015  279952 system_pods.go:89] "coredns-66bc5c9577-s8zsm" [24faae58-d6c6-42ad-93d3-3d160895982e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:02.002034  279952 system_pods.go:89] "etcd-default-k8s-diff-port-443884" [306255e6-2652-4217-ade8-a96f119869f2] Running
	I1210 23:06:02.002047  279952 system_pods.go:89] "kindnet-wtcv9" [d5d31b10-60af-4ff4-bb38-44edc65ef3d3] Running
	I1210 23:06:02.002052  279952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-443884" [4fb15273-fe29-41cc-9e81-99448e6f455a] Running
	I1210 23:06:02.002058  279952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-443884" [df38c0f6-f94b-404f-b33c-c6c522b7a29e] Running
	I1210 23:06:02.002067  279952 system_pods.go:89] "kube-proxy-lwnhd" [fcf815a4-e235-459b-b10a-31761cb8ad21] Running
	I1210 23:06:02.002072  279952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-443884" [7bb103ce-e5ca-49af-948f-735d76edbdd0] Running
	I1210 23:06:02.002089  279952 system_pods.go:89] "storage-provisioner" [81e22dd7-170e-4dfb-abf8-96dde77438ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:02.002115  279952 retry.go:31] will retry after 240.974301ms: missing components: kube-dns
	I1210 23:06:02.247956  279952 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:02.247988  279952 system_pods.go:89] "coredns-66bc5c9577-s8zsm" [24faae58-d6c6-42ad-93d3-3d160895982e] Running
	I1210 23:06:02.247997  279952 system_pods.go:89] "etcd-default-k8s-diff-port-443884" [306255e6-2652-4217-ade8-a96f119869f2] Running
	I1210 23:06:02.248003  279952 system_pods.go:89] "kindnet-wtcv9" [d5d31b10-60af-4ff4-bb38-44edc65ef3d3] Running
	I1210 23:06:02.248008  279952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-443884" [4fb15273-fe29-41cc-9e81-99448e6f455a] Running
	I1210 23:06:02.248014  279952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-443884" [df38c0f6-f94b-404f-b33c-c6c522b7a29e] Running
	I1210 23:06:02.248019  279952 system_pods.go:89] "kube-proxy-lwnhd" [fcf815a4-e235-459b-b10a-31761cb8ad21] Running
	I1210 23:06:02.248024  279952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-443884" [7bb103ce-e5ca-49af-948f-735d76edbdd0] Running
	I1210 23:06:02.248028  279952 system_pods.go:89] "storage-provisioner" [81e22dd7-170e-4dfb-abf8-96dde77438ac] Running
	I1210 23:06:02.248038  279952 system_pods.go:126] duration metric: took 248.937727ms to wait for k8s-apps to be running ...
	I1210 23:06:02.248047  279952 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:06:02.248106  279952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:02.265829  279952 system_svc.go:56] duration metric: took 17.771132ms WaitForService to wait for kubelet
	I1210 23:06:02.265874  279952 kubeadm.go:587] duration metric: took 11.578108743s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:02.265905  279952 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:02.269489  279952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:02.269524  279952 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:02.269541  279952 node_conditions.go:105] duration metric: took 3.626656ms to run NodePressure ...
	I1210 23:06:02.269556  279952 start.go:242] waiting for startup goroutines ...
	I1210 23:06:02.269566  279952 start.go:247] waiting for cluster config update ...
	I1210 23:06:02.269579  279952 start.go:256] writing updated cluster config ...
	I1210 23:06:02.269922  279952 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:02.274996  279952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:06:02.279705  279952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8zsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.285348  279952 pod_ready.go:94] pod "coredns-66bc5c9577-s8zsm" is "Ready"
	I1210 23:06:02.285379  279952 pod_ready.go:86] duration metric: took 5.652574ms for pod "coredns-66bc5c9577-s8zsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.287978  279952 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.293606  279952 pod_ready.go:94] pod "etcd-default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:02.293628  279952 pod_ready.go:86] duration metric: took 5.545949ms for pod "etcd-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.296176  279952 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.301234  279952 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:02.301261  279952 pod_ready.go:86] duration metric: took 5.063175ms for pod "kube-apiserver-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.303488  279952 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.679493  279952 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:02.679524  279952 pod_ready.go:86] duration metric: took 376.014525ms for pod "kube-controller-manager-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:02.879784  279952 pod_ready.go:83] waiting for pod "kube-proxy-lwnhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:03.279694  279952 pod_ready.go:94] pod "kube-proxy-lwnhd" is "Ready"
	I1210 23:06:03.279736  279952 pod_ready.go:86] duration metric: took 399.912621ms for pod "kube-proxy-lwnhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:03.480184  279952 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:03.880257  279952 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:03.880289  279952 pod_ready.go:86] duration metric: took 400.078017ms for pod "kube-scheduler-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:03.880303  279952 pod_ready.go:40] duration metric: took 1.605266878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:06:03.935743  279952 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:06:03.940720  279952 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-443884" cluster and "default" namespace by default
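	For reference, the per-pod readiness wait logged above can be approximated by hand against the same cluster (a sketch only: the kubeconfig context is assumed to match the minikube profile name, and the label selectors mirror the ones pod_ready.go lists):
	
	  # illustrative kubectl equivalents of the readiness checks; not part of the test run
	  kubectl --context default-k8s-diff-port-443884 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context default-k8s-diff-port-443884 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m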
	I1210 23:06:01.871078  288470 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:06:01.871436  288470 start.go:159] libmachine.API.Create for "newest-cni-852445" (driver="docker")
	I1210 23:06:01.871483  288470 client.go:173] LocalClient.Create starting
	I1210 23:06:01.871595  288470 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:06:01.871653  288470 main.go:143] libmachine: Decoding PEM data...
	I1210 23:06:01.871680  288470 main.go:143] libmachine: Parsing certificate...
	I1210 23:06:01.871765  288470 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:06:01.871793  288470 main.go:143] libmachine: Decoding PEM data...
	I1210 23:06:01.871812  288470 main.go:143] libmachine: Parsing certificate...
	I1210 23:06:01.872271  288470 cli_runner.go:164] Run: docker network inspect newest-cni-852445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:06:01.895599  288470 cli_runner.go:211] docker network inspect newest-cni-852445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:06:01.895711  288470 network_create.go:284] running [docker network inspect newest-cni-852445] to gather additional debugging logs...
	I1210 23:06:01.895736  288470 cli_runner.go:164] Run: docker network inspect newest-cni-852445
	W1210 23:06:01.916365  288470 cli_runner.go:211] docker network inspect newest-cni-852445 returned with exit code 1
	I1210 23:06:01.916412  288470 network_create.go:287] error running [docker network inspect newest-cni-852445]: docker network inspect newest-cni-852445: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-852445 not found
	I1210 23:06:01.916434  288470 network_create.go:289] output of [docker network inspect newest-cni-852445]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-852445 not found
	
	** /stderr **
	I1210 23:06:01.916538  288470 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:01.941226  288470 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:06:01.942233  288470 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:06:01.943183  288470 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:06:01.943966  288470 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8875699386e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:89:d4:9b:b9:bc} reservation:<nil>}
	I1210 23:06:01.944952  288470 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e0fae0}
	I1210 23:06:01.944980  288470 network_create.go:124] attempt to create docker network newest-cni-852445 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 23:06:01.945034  288470 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-852445 newest-cni-852445
	I1210 23:06:02.011207  288470 network_create.go:108] docker network newest-cni-852445 192.168.85.0/24 created
	I1210 23:06:02.011241  288470 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-852445" container
	I1210 23:06:02.011322  288470 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:06:02.031223  288470 cli_runner.go:164] Run: docker volume create newest-cni-852445 --label name.minikube.sigs.k8s.io=newest-cni-852445 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:06:02.049711  288470 oci.go:103] Successfully created a docker volume newest-cni-852445
	I1210 23:06:02.049797  288470 cli_runner.go:164] Run: docker run --rm --name newest-cni-852445-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-852445 --entrypoint /usr/bin/test -v newest-cni-852445:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:06:02.507237  288470 oci.go:107] Successfully prepared a docker volume newest-cni-852445
	I1210 23:06:02.507302  288470 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:06:02.507313  288470 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:06:02.507392  288470 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-852445:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:06:06.583484  288470 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-852445:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.076037125s)
	I1210 23:06:06.583520  288470 kic.go:203] duration metric: took 4.076202936s to extract preloaded images to volume ...
	W1210 23:06:06.583598  288470 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:06:06.583671  288470 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:06:06.583722  288470 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
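	The scan above skips each 192.168.x.0/24 subnet that already has a bridge, settles on 192.168.85.0/24, creates the network and a named volume, and then extracts the preload tarball into that volume. The resulting objects can be inspected afterwards (a sketch; assumes the docker CLI on the test host):
	
	  # subnet and gateway minikube picked for the profile
	  docker network inspect newest-cni-852445 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	  # volume that now holds the preloaded images (mounted at /var by the kicbase container)
	  docker volume inspect newest-cni-852445 --format '{{.Mountpoint}}'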
	
	
	==> CRI-O <==
	Dec 10 23:05:56 embed-certs-468067 crio[777]: time="2025-12-10T23:05:56.352549212Z" level=info msg="Starting container: 5c453473edbad630cba7f5c737687c450f8104a1fd888e899a68122423fa0bf8" id=67e1a52c-5fac-422c-9e91-d5ce7d4f9e4f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:05:56 embed-certs-468067 crio[777]: time="2025-12-10T23:05:56.354910495Z" level=info msg="Started container" PID=1874 containerID=5c453473edbad630cba7f5c737687c450f8104a1fd888e899a68122423fa0bf8 description=kube-system/coredns-66bc5c9577-qw48c/coredns id=67e1a52c-5fac-422c-9e91-d5ce7d4f9e4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d43d44185a8c0881b24d518981ebc819154dfdd9fe107f38ba09830f7c02366
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.943789413Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5477682b-3f23-41a8-9935-8948f12d1eaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.94386268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.948536017Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9672a920c03c7641de3e019376ed3c8b266c747ad4d121b414bb6caa76cba01a UID:3e157d1d-e99f-4f73-a95d-a881d3d14cc4 NetNS:/var/run/netns/e1e0eb89-1631-4968-be5f-4ad09865c8ca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa28}] Aliases:map[]}"
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.948573345Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.958772656Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9672a920c03c7641de3e019376ed3c8b266c747ad4d121b414bb6caa76cba01a UID:3e157d1d-e99f-4f73-a95d-a881d3d14cc4 NetNS:/var/run/netns/e1e0eb89-1631-4968-be5f-4ad09865c8ca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa28}] Aliases:map[]}"
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.959304912Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.960714312Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.961865385Z" level=info msg="Ran pod sandbox 9672a920c03c7641de3e019376ed3c8b266c747ad4d121b414bb6caa76cba01a with infra container: default/busybox/POD" id=5477682b-3f23-41a8-9935-8948f12d1eaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.963193999Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9396166-34e5-4aba-b0d0-bebd8b35133a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.963356845Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c9396166-34e5-4aba-b0d0-bebd8b35133a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.963415279Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c9396166-34e5-4aba-b0d0-bebd8b35133a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.964417396Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8928d5c7-8545-41c1-8456-8a8c52356ba7 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:05:59 embed-certs-468067 crio[777]: time="2025-12-10T23:05:59.969116909Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.284826816Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8928d5c7-8545-41c1-8456-8a8c52356ba7 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.285632771Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11f4aad7-7c6f-4e27-bad2-bf38c03c9cc8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.287419816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63c5ad6d-626d-4d71-98ae-01d46236198a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.291162423Z" level=info msg="Creating container: default/busybox/busybox" id=66abdba1-cabc-456f-93b4-79af356cea1c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.29131041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.296419956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.297019583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.32591813Z" level=info msg="Created container d06fc0ce3e8f98b62da2172f238721e790efac182ff28489ecd14ddb85b2bfeb: default/busybox/busybox" id=66abdba1-cabc-456f-93b4-79af356cea1c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.326627843Z" level=info msg="Starting container: d06fc0ce3e8f98b62da2172f238721e790efac182ff28489ecd14ddb85b2bfeb" id=a4a1abc8-aaea-4a4d-8724-53cc1cd62ba9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:01 embed-certs-468067 crio[777]: time="2025-12-10T23:06:01.328410923Z" level=info msg="Started container" PID=1949 containerID=d06fc0ce3e8f98b62da2172f238721e790efac182ff28489ecd14ddb85b2bfeb description=default/busybox/busybox id=a4a1abc8-aaea-4a4d-8724-53cc1cd62ba9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9672a920c03c7641de3e019376ed3c8b266c747ad4d121b414bb6caa76cba01a
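	The ImageStatus -> PullImage -> CreateContainer sequence above is the normal cold-pull path for the busybox test image. It can be retraced on the node with crictl (a sketch; assumes crictl is available inside the kicbase node, which minikube itself relies on):
	
	  # open a shell on the node first: minikube -p embed-certs-468067 ssh
	  sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image status check, as in the log
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc       # same pull CRI-O performed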
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d06fc0ce3e8f9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   9672a920c03c7       busybox                                      default
	5c453473edbad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   3d43d44185a8c       coredns-66bc5c9577-qw48c                     kube-system
	f558d37c2f7ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   fb3e5c224ea87       storage-provisioner                          kube-system
	4785658906ef3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   bd7ee5a32b7f7       kindnet-dkdlj                                kube-system
	13954d9e2297c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   50b6e82e6fd83       kube-proxy-27pft                             kube-system
	b8f753ff84d9b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      32 seconds ago      Running             kube-apiserver            0                   343f0be292022       kube-apiserver-embed-certs-468067            kube-system
	29c782b134b47       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      32 seconds ago      Running             etcd                      0                   d49bdfc8d17ed       etcd-embed-certs-468067                      kube-system
	e1c90c7a606ae       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      32 seconds ago      Running             kube-controller-manager   0                   b5cb245418ef1       kube-controller-manager-embed-certs-468067   kube-system
	789136f5fd50c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      32 seconds ago      Running             kube-scheduler            0                   3d8a4cc88ab4c       kube-scheduler-embed-certs-468067            kube-system
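	This table is CRI-O's view of the node at collection time; the same listing can be refreshed live against the profile (a sketch):
	
	  minikube -p embed-certs-468067 ssh -- sudo crictl ps -a    # containers, matching the IDs above
	  minikube -p embed-certs-468067 ssh -- sudo crictl pods     # sandboxes, i.e. the POD ID column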
	
	
	==> coredns [5c453473edbad630cba7f5c737687c450f8104a1fd888e899a68122423fa0bf8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42339 - 61509 "HINFO IN 611337319487739201.8878640766527770132. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.416630667s
	
	
	==> describe nodes <==
	Name:               embed-certs-468067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-468067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=embed-certs-468067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_05_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:05:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-468067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:05:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:05:59 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:05:59 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:05:59 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:05:59 +0000   Wed, 10 Dec 2025 23:05:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-468067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                d2cd28f2-4471-41b6-a37d-4eadfd61fbb3
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-qw48c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-468067                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-dkdlj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-468067             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-468067    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-27pft                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-468067             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node embed-certs-468067 event: Registered Node embed-certs-468067 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-468067 status is now: NodeReady
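	As a quick consistency check, the 850m CPU request in the summary is the sum of the per-pod requests above (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and the 220Mi memory request is 70Mi + 100Mi + 50Mi; against 8 CPUs and ~32 GiB of memory that yields the reported 10% and 0%. The same view can be regenerated with (assuming the kubeconfig context matches the profile name):
	
	  kubectl --context embed-certs-468067 describe node embed-certs-468067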
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [29c782b134b473b41daaf256a7e00bcc9c1b9543b512d662cf4b1c0aee357e4b] <==
	{"level":"warn","ts":"2025-12-10T23:05:36.363243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.369850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.377557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.385281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.393836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.401456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.407676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.415827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.424451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.431726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.439210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.447498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.454195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.462553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.470969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.478321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.486597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.495355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.503641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.511615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.531790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.540280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.550156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:36.614118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T23:06:06.557135Z","caller":"traceutil/trace.go:172","msg":"trace[2064007399] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"132.83302ms","start":"2025-12-10T23:06:06.424277Z","end":"2025-12-10T23:06:06.557110Z","steps":["trace[2064007399] 'process raft request'  (duration: 132.670547ms)"],"step_count":1}
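	The burst of "rejected connection ... EOF" warnings during startup is most often TCP-level probing (clients dialing the port to see whether etcd is up and closing before completing a TLS handshake), not a data-path failure; the cluster's etcd health can be confirmed through the apiserver (a sketch; assumes the kubeconfig context matches the profile name):
	
	  kubectl --context embed-certs-468067 get --raw='/readyz?verbose'   # verbose output includes the etcd readiness check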
	
	
	==> kernel <==
	 23:06:08 up 48 min,  0 user,  load average: 4.25, 2.89, 1.86
	Linux embed-certs-468067 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4785658906ef3de44b26745a06be87d0e5a1769660da9312fc2a5607c46576d7] <==
	I1210 23:05:45.258049       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:05:45.258352       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 23:05:45.258487       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:05:45.258506       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:05:45.258547       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:05:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:05:45.466780       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:05:45.556514       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:05:45.557356       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:05:45.633540       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:05:45.933967       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:05:45.933999       1 metrics.go:72] Registering metrics
	I1210 23:05:45.934075       1 controller.go:711] "Syncing nftables rules"
	I1210 23:05:55.466902       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 23:05:55.466957       1 main.go:301] handling current node
	I1210 23:06:05.466996       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 23:06:05.467036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b8f753ff84d9bda73bbee832782edb92f9599240565cfa50457e1ee0d10759ce] <==
	I1210 23:05:37.143842       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:05:37.143892       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 23:05:37.143924       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 23:05:37.148477       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 23:05:37.150061       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:05:37.153624       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 23:05:37.165613       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:38.046662       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 23:05:38.050578       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 23:05:38.050598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:05:38.539577       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:05:38.575226       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:05:38.652534       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 23:05:38.660415       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1210 23:05:38.661744       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:05:38.668612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:05:39.339122       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:05:39.465533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:05:39.474340       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 23:05:39.483780       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 23:05:44.593793       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 23:05:45.044637       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:05:45.393021       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:45.397617       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1210 23:06:06.774255       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:40104: use of closed network connection
	
	
	==> kube-controller-manager [e1c90c7a606aee56054d6e4d5a2ab31e89fbb3290807fea359f256200fab7dc8] <==
	I1210 23:05:44.339152       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 23:05:44.339171       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 23:05:44.339193       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 23:05:44.339203       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 23:05:44.339211       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 23:05:44.339220       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 23:05:44.339233       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:05:44.339251       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 23:05:44.339274       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:05:44.339547       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 23:05:44.339683       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:05:44.342631       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:05:44.350848       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:05:44.353990       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 23:05:44.354057       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 23:05:44.354110       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 23:05:44.354118       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 23:05:44.354122       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 23:05:44.356283       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 23:05:44.356415       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 23:05:44.356510       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-468067"
	I1210 23:05:44.356567       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 23:05:44.361476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-468067" podCIDRs=["10.244.0.0/24"]
	I1210 23:05:44.365588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:05:59.358355       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [13954d9e2297c82e4ac06559941686faa99347efd04669b95ad608effabb83e1] <==
	I1210 23:05:45.019704       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:05:45.091766       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:05:45.192234       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:05:45.192286       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 23:05:45.192380       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:05:45.215136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:05:45.215210       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:05:45.220359       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:05:45.220813       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:05:45.220850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:45.222398       1 config.go:200] "Starting service config controller"
	I1210 23:05:45.222420       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:05:45.222424       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:05:45.222556       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:05:45.222576       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:05:45.222954       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:05:45.223085       1 config.go:309] "Starting node config controller"
	I1210 23:05:45.223110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:05:45.223148       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:05:45.322597       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:05:45.324026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 23:05:45.324041       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [789136f5fd50cacf5c03668bd63dbdb2ca074d02253609ff2022586ab243f4dd] <==
	E1210 23:05:37.113928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:05:37.113985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 23:05:37.114369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:05:37.114449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:05:37.114449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:05:37.114514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 23:05:37.114540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 23:05:37.114550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 23:05:37.114599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 23:05:37.114620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 23:05:37.114737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 23:05:37.114835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 23:05:37.114962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 23:05:37.115091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:05:37.115303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 23:05:37.976673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:05:38.041391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 23:05:38.109889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 23:05:38.129269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:05:38.274831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 23:05:38.279799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 23:05:38.290969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:05:38.311119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:05:38.520954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 23:05:41.411215       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:05:40 embed-certs-468067 kubelet[1331]: I1210 23:05:40.343470    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-468067" podStartSLOduration=1.343430847 podStartE2EDuration="1.343430847s" podCreationTimestamp="2025-12-10 23:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:40.343081395 +0000 UTC m=+1.134389826" watchObservedRunningTime="2025-12-10 23:05:40.343430847 +0000 UTC m=+1.134739274"
	Dec 10 23:05:40 embed-certs-468067 kubelet[1331]: I1210 23:05:40.356842    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-468067" podStartSLOduration=1.356818864 podStartE2EDuration="1.356818864s" podCreationTimestamp="2025-12-10 23:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:40.354765842 +0000 UTC m=+1.146074272" watchObservedRunningTime="2025-12-10 23:05:40.356818864 +0000 UTC m=+1.148127294"
	Dec 10 23:05:40 embed-certs-468067 kubelet[1331]: I1210 23:05:40.376513    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-468067" podStartSLOduration=1.376494528 podStartE2EDuration="1.376494528s" podCreationTimestamp="2025-12-10 23:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:40.364348284 +0000 UTC m=+1.155656726" watchObservedRunningTime="2025-12-10 23:05:40.376494528 +0000 UTC m=+1.167803017"
	Dec 10 23:05:40 embed-certs-468067 kubelet[1331]: I1210 23:05:40.376613    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-468067" podStartSLOduration=1.376605417 podStartE2EDuration="1.376605417s" podCreationTimestamp="2025-12-10 23:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:40.376260622 +0000 UTC m=+1.167569052" watchObservedRunningTime="2025-12-10 23:05:40.376605417 +0000 UTC m=+1.167913847"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.430052    1331 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.430858    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724223    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0837f94b-4c23-4d59-9718-dcf9b2f5a276-lib-modules\") pod \"kindnet-dkdlj\" (UID: \"0837f94b-4c23-4d59-9718-dcf9b2f5a276\") " pod="kube-system/kindnet-dkdlj"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724265    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56fhj\" (UniqueName: \"kubernetes.io/projected/a31d4ae8-642f-4d74-9bf7-726ec7a2dacb-kube-api-access-56fhj\") pod \"kube-proxy-27pft\" (UID: \"a31d4ae8-642f-4d74-9bf7-726ec7a2dacb\") " pod="kube-system/kube-proxy-27pft"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724284    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a31d4ae8-642f-4d74-9bf7-726ec7a2dacb-xtables-lock\") pod \"kube-proxy-27pft\" (UID: \"a31d4ae8-642f-4d74-9bf7-726ec7a2dacb\") " pod="kube-system/kube-proxy-27pft"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724299    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a31d4ae8-642f-4d74-9bf7-726ec7a2dacb-lib-modules\") pod \"kube-proxy-27pft\" (UID: \"a31d4ae8-642f-4d74-9bf7-726ec7a2dacb\") " pod="kube-system/kube-proxy-27pft"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724325    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0837f94b-4c23-4d59-9718-dcf9b2f5a276-cni-cfg\") pod \"kindnet-dkdlj\" (UID: \"0837f94b-4c23-4d59-9718-dcf9b2f5a276\") " pod="kube-system/kindnet-dkdlj"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724348    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0837f94b-4c23-4d59-9718-dcf9b2f5a276-xtables-lock\") pod \"kindnet-dkdlj\" (UID: \"0837f94b-4c23-4d59-9718-dcf9b2f5a276\") " pod="kube-system/kindnet-dkdlj"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724373    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpq9\" (UniqueName: \"kubernetes.io/projected/0837f94b-4c23-4d59-9718-dcf9b2f5a276-kube-api-access-cbpq9\") pod \"kindnet-dkdlj\" (UID: \"0837f94b-4c23-4d59-9718-dcf9b2f5a276\") " pod="kube-system/kindnet-dkdlj"
	Dec 10 23:05:44 embed-certs-468067 kubelet[1331]: I1210 23:05:44.724435    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a31d4ae8-642f-4d74-9bf7-726ec7a2dacb-kube-proxy\") pod \"kube-proxy-27pft\" (UID: \"a31d4ae8-642f-4d74-9bf7-726ec7a2dacb\") " pod="kube-system/kube-proxy-27pft"
	Dec 10 23:05:45 embed-certs-468067 kubelet[1331]: I1210 23:05:45.338208    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dkdlj" podStartSLOduration=1.338182786 podStartE2EDuration="1.338182786s" podCreationTimestamp="2025-12-10 23:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:45.337979112 +0000 UTC m=+6.129287544" watchObservedRunningTime="2025-12-10 23:05:45.338182786 +0000 UTC m=+6.129491216"
	Dec 10 23:05:46 embed-certs-468067 kubelet[1331]: I1210 23:05:46.488872    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27pft" podStartSLOduration=2.488850053 podStartE2EDuration="2.488850053s" podCreationTimestamp="2025-12-10 23:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:45.34742439 +0000 UTC m=+6.138732820" watchObservedRunningTime="2025-12-10 23:05:46.488850053 +0000 UTC m=+7.280158463"
	Dec 10 23:05:55 embed-certs-468067 kubelet[1331]: I1210 23:05:55.970570    1331 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 23:05:56 embed-certs-468067 kubelet[1331]: I1210 23:05:56.111630    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d3a4070-1f4d-4958-8748-0d5c00f296ec-config-volume\") pod \"coredns-66bc5c9577-qw48c\" (UID: \"9d3a4070-1f4d-4958-8748-0d5c00f296ec\") " pod="kube-system/coredns-66bc5c9577-qw48c"
	Dec 10 23:05:56 embed-certs-468067 kubelet[1331]: I1210 23:05:56.111699    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjqgj\" (UniqueName: \"kubernetes.io/projected/9d3a4070-1f4d-4958-8748-0d5c00f296ec-kube-api-access-zjqgj\") pod \"coredns-66bc5c9577-qw48c\" (UID: \"9d3a4070-1f4d-4958-8748-0d5c00f296ec\") " pod="kube-system/coredns-66bc5c9577-qw48c"
	Dec 10 23:05:56 embed-certs-468067 kubelet[1331]: I1210 23:05:56.111728    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cba94e39-8a92-4cf5-a616-80857c063c22-tmp\") pod \"storage-provisioner\" (UID: \"cba94e39-8a92-4cf5-a616-80857c063c22\") " pod="kube-system/storage-provisioner"
	Dec 10 23:05:56 embed-certs-468067 kubelet[1331]: I1210 23:05:56.111783    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7l8d\" (UniqueName: \"kubernetes.io/projected/cba94e39-8a92-4cf5-a616-80857c063c22-kube-api-access-h7l8d\") pod \"storage-provisioner\" (UID: \"cba94e39-8a92-4cf5-a616-80857c063c22\") " pod="kube-system/storage-provisioner"
	Dec 10 23:05:57 embed-certs-468067 kubelet[1331]: I1210 23:05:57.391826    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.391802161 podStartE2EDuration="12.391802161s" podCreationTimestamp="2025-12-10 23:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:56.367038097 +0000 UTC m=+17.158346527" watchObservedRunningTime="2025-12-10 23:05:57.391802161 +0000 UTC m=+18.183110592"
	Dec 10 23:05:57 embed-certs-468067 kubelet[1331]: I1210 23:05:57.392509    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qw48c" podStartSLOduration=12.392488847 podStartE2EDuration="12.392488847s" podCreationTimestamp="2025-12-10 23:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:57.391696668 +0000 UTC m=+18.183005100" watchObservedRunningTime="2025-12-10 23:05:57.392488847 +0000 UTC m=+18.183797277"
	Dec 10 23:05:59 embed-certs-468067 kubelet[1331]: I1210 23:05:59.734200    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2xs\" (UniqueName: \"kubernetes.io/projected/3e157d1d-e99f-4f73-a95d-a881d3d14cc4-kube-api-access-7m2xs\") pod \"busybox\" (UID: \"3e157d1d-e99f-4f73-a95d-a881d3d14cc4\") " pod="default/busybox"
	Dec 10 23:06:01 embed-certs-468067 kubelet[1331]: I1210 23:06:01.387966    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.064976257 podStartE2EDuration="2.387939359s" podCreationTimestamp="2025-12-10 23:05:59 +0000 UTC" firstStartedPulling="2025-12-10 23:05:59.963812528 +0000 UTC m=+20.755120956" lastFinishedPulling="2025-12-10 23:06:01.286775629 +0000 UTC m=+22.078084058" observedRunningTime="2025-12-10 23:06:01.387243529 +0000 UTC m=+22.178551960" watchObservedRunningTime="2025-12-10 23:06:01.387939359 +0000 UTC m=+22.179247788"
	
	
	==> storage-provisioner [f558d37c2f7acb39a5ef045cfbd60eb184069cc1e85998588bcbdc1c081b5140] <==
	I1210 23:05:56.355688       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:05:56.365541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:05:56.365600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:05:56.368301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:56.374820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:05:56.375001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:05:56.375815       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-468067_530319d6-cfe5-41d7-bb2c-626e812103e0!
	I1210 23:05:56.376247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48704f4d-8b51-4c73-91f7-52bbe5715cf0", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-468067_530319d6-cfe5-41d7-bb2c-626e812103e0 became leader
	W1210 23:05:56.379754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:56.385232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:05:56.476099       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-468067_530319d6-cfe5-41d7-bb2c-626e812103e0!
	W1210 23:05:58.388943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:05:58.393820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:00.397598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:00.402245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:02.406147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:02.410794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:04.414848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:04.418895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:06.422238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:06.558379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:08.562206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:08.566409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
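Note on the tail of the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings are emitted each time the provisioner refreshes its leader-election lock, which (per the LeaderElection event in the same log) is stored in the core/v1 Endpoints object kube-system/k8s.io-minikube-hostpath. Purely as an illustrative sketch, and only while the profile still exists, the lock object could be inspected with the same kubectl context the harness uses:

	kubectl --context embed-certs-468067 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml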
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468067 -n embed-certs-468067
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-468067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.273276ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
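The exit status 11 above is not the metrics-server addon itself failing: per the MK_ADDON_ENABLE_PAUSED message, minikube first runs a "check paused" step that executes `sudo runc list -f json` on the node, and that command fails because /run/runc does not exist on this crio node, so the addon manifest is never applied. That is also why the follow-up checks below come back empty: the metrics-server deployment was never created, so the describe call reports NotFound and the image assertion has nothing to match. A minimal sketch for rerunning the same check by hand, assuming the node container named in the docker inspect output further down is still running:

	docker exec default-k8s-diff-port-443884 sudo ls /run/runc
	docker exec default-k8s-diff-port-443884 sudo runc list -f json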
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-443884 describe deploy/metrics-server -n kube-system: exit status 1 (62.556809ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-443884 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-443884
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-443884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b",
	        "Created": "2025-12-10T23:05:26.959123143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281577,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:05:27.249431976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/hosts",
	        "LogPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b-json.log",
	        "Name": "/default-k8s-diff-port-443884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-443884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-443884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b",
	                "LowerDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-443884",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-443884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-443884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-443884",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-443884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e4b5773e32a758b7cf4abf0d9dfc613db1458ab4a54feb9f38d0f5a50db0226d",
	            "SandboxKey": "/var/run/docker/netns/e4b5773e32a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-443884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8875699386e55b3c7ba6f71ae6cb594bed837dd60b39b87d708bd26d3360a926",
	                    "EndpointID": "153f8c34300d53da53043fffcad11e344045be5887735678260785fe65521076",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8a:41:20:d5:b0:11",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-443884",
	                        "a8275652c47b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
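In the HostConfig above, every PortBindings entry requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports; the actual assignments (33084-33088) show up under NetworkSettings.Ports. To read a single mapping without dumping the whole inspect document, the Go template that minikube itself uses later in this log works from the shell as well; a small sketch against the container inspected above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-443884
	docker port default-k8s-diff-port-443884 22/tcp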
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-443884 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-443884 logs -n 25: (1.070014325s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p no-preload-092439 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:04 UTC │
	│ start   │ -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:04 UTC │ 10 Dec 25 23:05 UTC │
	│ addons  │ enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                         │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                            │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                                      │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                           │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p auto-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p embed-certs-468067 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:06:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:06:08.356287  291593 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:08.356547  291593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:08.356556  291593 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:08.356561  291593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:08.356782  291593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:08.357392  291593 out.go:368] Setting JSON to false
	I1210 23:06:08.358795  291593 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2910,"bootTime":1765405058,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:06:08.358870  291593 start.go:143] virtualization: kvm guest
	I1210 23:06:08.361329  291593 out.go:179] * [auto-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:06:08.363743  291593 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:06:08.363757  291593 notify.go:221] Checking for updates...
	I1210 23:06:08.366768  291593 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:06:08.368275  291593 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:08.369663  291593 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:06:08.371030  291593 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:06:08.373416  291593 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:06:08.375237  291593 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:08.375361  291593 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:08.375495  291593 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:08.375611  291593 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:06:08.404608  291593 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:06:08.404759  291593 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:08.464448  291593 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 23:06:08.453445809 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:08.464597  291593 docker.go:319] overlay module found
	I1210 23:06:08.466106  291593 out.go:179] * Using the docker driver based on user configuration
	I1210 23:06:08.467395  291593 start.go:309] selected driver: docker
	I1210 23:06:08.467411  291593 start.go:927] validating driver "docker" against <nil>
	I1210 23:06:08.467425  291593 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:06:08.468119  291593 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:08.532838  291593 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 23:06:08.521613224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:08.533051  291593 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:06:08.533349  291593 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:08.535225  291593 out.go:179] * Using Docker driver with root privileges
	I1210 23:06:08.536472  291593 cni.go:84] Creating CNI manager for ""
	I1210 23:06:08.536544  291593 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:08.536557  291593 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:06:08.536635  291593 start.go:353] cluster config:
	{Name:auto-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:08.538712  291593 out.go:179] * Starting "auto-177285" primary control-plane node in "auto-177285" cluster
	I1210 23:06:08.540364  291593 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:06:08.541775  291593 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:06:08.543067  291593 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:08.543104  291593 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:06:08.543116  291593 cache.go:65] Caching tarball of preloaded images
	I1210 23:06:08.543159  291593 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:06:08.543236  291593 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:06:08.543250  291593 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:06:08.543374  291593 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/auto-177285/config.json ...
	I1210 23:06:08.543396  291593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/auto-177285/config.json: {Name:mke6aac824f08b707afafa5413ea1be443142e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:08.568245  291593 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:06:08.568284  291593 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:06:08.568303  291593 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:06:08.568339  291593 start.go:360] acquireMachinesLock for auto-177285: {Name:mk1036fe0d75deb57e47d00d805fea5661cf328a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:06:08.568460  291593 start.go:364] duration metric: took 100.774µs to acquireMachinesLock for "auto-177285"
	I1210 23:06:08.568490  291593 start.go:93] Provisioning new machine with config: &{Name:auto-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-177285 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:08.568588  291593 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:06:06.651981  288470 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-852445 --name newest-cni-852445 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-852445 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-852445 --network newest-cni-852445 --ip 192.168.85.2 --volume newest-cni-852445:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:06:07.192577  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Running}}
	I1210 23:06:07.216266  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:07.241767  288470 cli_runner.go:164] Run: docker exec newest-cni-852445 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:06:07.307375  288470 oci.go:144] the created container "newest-cni-852445" has a running status.
	I1210 23:06:07.307413  288470 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa...
	I1210 23:06:07.391168  288470 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:06:07.850214  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:07.872240  288470 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:06:07.872260  288470 kic_runner.go:114] Args: [docker exec --privileged newest-cni-852445 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:06:07.924932  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:07.948597  288470 machine.go:94] provisionDockerMachine start ...
	I1210 23:06:07.948731  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:07.970341  288470 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:07.970679  288470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1210 23:06:07.970701  288470 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:06:08.116479  288470 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852445
	
	I1210 23:06:08.116513  288470 ubuntu.go:182] provisioning hostname "newest-cni-852445"
	I1210 23:06:08.116577  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:08.141159  288470 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:08.141404  288470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1210 23:06:08.141421  288470 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852445 && echo "newest-cni-852445" | sudo tee /etc/hostname
	I1210 23:06:08.293005  288470 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852445
	
	I1210 23:06:08.293132  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:08.315854  288470 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:08.316149  288470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1210 23:06:08.316166  288470 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852445/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:06:08.456675  288470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:06:08.456706  288470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:06:08.456736  288470 ubuntu.go:190] setting up certificates
	I1210 23:06:08.456754  288470 provision.go:84] configureAuth start
	I1210 23:06:08.456856  288470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:08.478216  288470 provision.go:143] copyHostCerts
	I1210 23:06:08.478299  288470 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:06:08.478316  288470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:06:08.478408  288470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:06:08.478536  288470 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:06:08.478550  288470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:06:08.478589  288470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:06:08.478692  288470 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:06:08.478704  288470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:06:08.478743  288470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:06:08.478822  288470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852445 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852445]
	I1210 23:06:08.718776  288470 provision.go:177] copyRemoteCerts
	I1210 23:06:08.718835  288470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:06:08.718880  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:08.739422  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:08.840489  288470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:06:08.864400  288470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:06:08.883534  288470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 23:06:08.903665  288470 provision.go:87] duration metric: took 446.893696ms to configureAuth
	I1210 23:06:08.903698  288470 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:06:08.903859  288470 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:08.903964  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:08.928029  288470 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:08.928308  288470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1210 23:06:08.928341  288470 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:06:09.246689  288470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:06:09.246716  288470 machine.go:97] duration metric: took 1.298090363s to provisionDockerMachine
	I1210 23:06:09.246731  288470 client.go:176] duration metric: took 7.375237406s to LocalClient.Create
	I1210 23:06:09.246747  288470 start.go:167] duration metric: took 7.375316289s to libmachine.API.Create "newest-cni-852445"
	I1210 23:06:09.246757  288470 start.go:293] postStartSetup for "newest-cni-852445" (driver="docker")
	I1210 23:06:09.246771  288470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:06:09.246842  288470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:06:09.246932  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:09.269124  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:09.382687  288470 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:06:09.386692  288470 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:06:09.386727  288470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:06:09.386740  288470 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:06:09.386797  288470 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:06:09.386907  288470 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:06:09.387036  288470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:06:09.395428  288470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:09.418740  288470 start.go:296] duration metric: took 171.962289ms for postStartSetup
	I1210 23:06:09.419201  288470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:09.449062  288470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json ...
	I1210 23:06:09.449561  288470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:06:09.449619  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:09.475007  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:09.573959  288470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:06:09.578774  288470 start.go:128] duration metric: took 7.714565736s to createHost
	I1210 23:06:09.578808  288470 start.go:83] releasing machines lock for "newest-cni-852445", held for 7.714732126s
	I1210 23:06:09.578887  288470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:09.599070  288470 ssh_runner.go:195] Run: cat /version.json
	I1210 23:06:09.599124  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:09.599138  288470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:06:09.599246  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:09.618391  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:09.619579  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:09.767067  288470 ssh_runner.go:195] Run: systemctl --version
	I1210 23:06:09.774066  288470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:06:09.812498  288470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:06:09.817550  288470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:06:09.817621  288470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:06:09.846994  288470 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:06:09.847022  288470 start.go:496] detecting cgroup driver to use...
	I1210 23:06:09.847057  288470 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:06:09.847104  288470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:06:09.864934  288470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:06:09.878877  288470 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:06:09.878939  288470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:06:09.896563  288470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:06:09.916521  288470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:06:10.004860  288470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:06:10.113342  288470 docker.go:234] disabling docker service ...
	I1210 23:06:10.113409  288470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:06:10.134118  288470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:06:10.148179  288470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:06:10.240272  288470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:06:10.336365  288470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:06:10.349240  288470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:06:10.364056  288470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:06:10.364129  288470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.377878  288470 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:06:10.377933  288470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.387971  288470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.397497  288470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.407357  288470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:06:10.415951  288470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.425123  288470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.441963  288470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:10.452159  288470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:06:10.461483  288470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:06:10.469905  288470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:10.585459  288470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:06:12.635904  288470 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.050395824s)
	I1210 23:06:12.635939  288470 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:06:12.635991  288470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:06:12.640596  288470 start.go:564] Will wait 60s for crictl version
	I1210 23:06:12.640672  288470 ssh_runner.go:195] Run: which crictl
	I1210 23:06:12.644799  288470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:06:12.671097  288470 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:06:12.671182  288470 ssh_runner.go:195] Run: crio --version
	I1210 23:06:12.702867  288470 ssh_runner.go:195] Run: crio --version
	I1210 23:06:12.735753  288470 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 23:06:12.738719  288470 cli_runner.go:164] Run: docker network inspect newest-cni-852445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:12.758070  288470 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 23:06:12.763253  288470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:12.778227  288470 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 23:06:08.571381  291593 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:06:08.571635  291593 start.go:159] libmachine.API.Create for "auto-177285" (driver="docker")
	I1210 23:06:08.571687  291593 client.go:173] LocalClient.Create starting
	I1210 23:06:08.571775  291593 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:06:08.571815  291593 main.go:143] libmachine: Decoding PEM data...
	I1210 23:06:08.571839  291593 main.go:143] libmachine: Parsing certificate...
	I1210 23:06:08.571901  291593 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:06:08.571929  291593 main.go:143] libmachine: Decoding PEM data...
	I1210 23:06:08.571949  291593 main.go:143] libmachine: Parsing certificate...
	I1210 23:06:08.572307  291593 cli_runner.go:164] Run: docker network inspect auto-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:06:08.591213  291593 cli_runner.go:211] docker network inspect auto-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:06:08.591279  291593 network_create.go:284] running [docker network inspect auto-177285] to gather additional debugging logs...
	I1210 23:06:08.591298  291593 cli_runner.go:164] Run: docker network inspect auto-177285
	W1210 23:06:08.609920  291593 cli_runner.go:211] docker network inspect auto-177285 returned with exit code 1
	I1210 23:06:08.609952  291593 network_create.go:287] error running [docker network inspect auto-177285]: docker network inspect auto-177285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-177285 not found
	I1210 23:06:08.609972  291593 network_create.go:289] output of [docker network inspect auto-177285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-177285 not found
	
	** /stderr **
	I1210 23:06:08.610062  291593 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:08.629154  291593 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:06:08.630099  291593 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:06:08.631098  291593 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:06:08.631850  291593 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8875699386e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:89:d4:9b:b9:bc} reservation:<nil>}
	I1210 23:06:08.632693  291593 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cb4831c90c0c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:21:e6:11:b4:b4} reservation:<nil>}
	I1210 23:06:08.633577  291593 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f06080}
	I1210 23:06:08.633601  291593 network_create.go:124] attempt to create docker network auto-177285 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:06:08.633661  291593 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-177285 auto-177285
	I1210 23:06:08.692739  291593 network_create.go:108] docker network auto-177285 192.168.94.0/24 created
	I1210 23:06:08.692776  291593 kic.go:121] calculated static IP "192.168.94.2" for the "auto-177285" container
	I1210 23:06:08.692859  291593 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:06:08.713653  291593 cli_runner.go:164] Run: docker volume create auto-177285 --label name.minikube.sigs.k8s.io=auto-177285 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:06:08.733697  291593 oci.go:103] Successfully created a docker volume auto-177285
	I1210 23:06:08.733774  291593 cli_runner.go:164] Run: docker run --rm --name auto-177285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-177285 --entrypoint /usr/bin/test -v auto-177285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:06:09.173866  291593 oci.go:107] Successfully prepared a docker volume auto-177285
	I1210 23:06:09.173944  291593 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:09.173962  291593 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:06:09.174036  291593 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:06:12.599464  291593 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.425381364s)
	I1210 23:06:12.599526  291593 kic.go:203] duration metric: took 3.425549989s to extract preloaded images to volume ...
	W1210 23:06:12.599634  291593 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:06:12.599696  291593 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:06:12.599751  291593 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:06:12.659410  291593 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-177285 --name auto-177285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-177285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-177285 --network auto-177285 --ip 192.168.94.2 --volume auto-177285:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:06:12.956198  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Running}}
	I1210 23:06:12.976450  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:12.995916  291593 cli_runner.go:164] Run: docker exec auto-177285 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:06:13.049131  291593 oci.go:144] the created container "auto-177285" has a running status.
	I1210 23:06:13.049159  291593 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/auto-177285/id_rsa...
	I1210 23:06:13.089000  291593 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/auto-177285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:06:13.126756  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:13.148992  291593 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:06:13.149015  291593 kic_runner.go:114] Args: [docker exec --privileged auto-177285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:06:13.196977  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:13.222305  291593 machine.go:94] provisionDockerMachine start ...
	I1210 23:06:13.222411  291593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-177285
	I1210 23:06:13.243236  291593 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:13.243526  291593 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1210 23:06:13.243546  291593 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:06:13.244231  291593 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59044->127.0.0.1:33094: read: connection reset by peer
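For reference, the CRI-O adjustments logged during provisioning above all land in the 02-crio.conf drop-in. A minimal sketch of confirming them on the node, using only the file path and values shown in the log (anything else in that file is assumed to be the stock drop-in):

	# values below come from the sed edits logged above; run on the node (e.g. via minikube ssh)
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",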
	
	
	==> CRI-O <==
	Dec 10 23:06:01 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:01.92678805Z" level=info msg="Starting container: 15477ac26978a01e700ae3bd7fc7003ebb1d99d437f0c976d9a162f62873e239" id=3837a453-e06a-49c4-af93-6c441854cde3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:01 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:01.929172328Z" level=info msg="Started container" PID=1844 containerID=15477ac26978a01e700ae3bd7fc7003ebb1d99d437f0c976d9a162f62873e239 description=kube-system/coredns-66bc5c9577-s8zsm/coredns id=3837a453-e06a-49c4-af93-6c441854cde3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cad79281f75a535b158d2bbd6df71e34c22e9b9d3c2447cb82b0074b98fb597
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.437669035Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8b4b0d5c-a443-43de-88e0-9405a712ac05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.437775192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.445812953Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:608232bc2acc067b0720412f95d9f403d787bfeba07a0f67b366b7f807d5994f UID:c0dc1efe-3497-4123-8574-5fff0265cf3e NetNS:/var/run/netns/d37139cf-c719-4410-9bd5-ddfb1d7aaa7b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d02380}] Aliases:map[]}"
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.445845869Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.45827717Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:608232bc2acc067b0720412f95d9f403d787bfeba07a0f67b366b7f807d5994f UID:c0dc1efe-3497-4123-8574-5fff0265cf3e NetNS:/var/run/netns/d37139cf-c719-4410-9bd5-ddfb1d7aaa7b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d02380}] Aliases:map[]}"
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.458447241Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.459477803Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.460739591Z" level=info msg="Ran pod sandbox 608232bc2acc067b0720412f95d9f403d787bfeba07a0f67b366b7f807d5994f with infra container: default/busybox/POD" id=8b4b0d5c-a443-43de-88e0-9405a712ac05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.462119234Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac8e3221-af7b-45c7-9159-904ffeae9f1d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.462257537Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ac8e3221-af7b-45c7-9159-904ffeae9f1d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.462301429Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ac8e3221-af7b-45c7-9159-904ffeae9f1d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.462983387Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e24e321-b436-4f5d-b720-48b9a3bbc00e name=/runtime.v1.ImageService/PullImage
	Dec 10 23:06:04 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:04.465336945Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.5958095Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6e24e321-b436-4f5d-b720-48b9a3bbc00e name=/runtime.v1.ImageService/PullImage
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.596621376Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1fd1d3fc-b332-428b-a7ca-b52946b9d84c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.598036115Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8b8cdd10-5b52-4a27-b5d5-fea8064ecd80 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.602731155Z" level=info msg="Creating container: default/busybox/busybox" id=00e590f5-855e-4246-83ea-07a826edc961 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.602871495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.608084471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.608675709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.634564316Z" level=info msg="Created container 61507e642e869eb0f3ff69e3d92507bfc67d1eb41d8fc8fe750aa2d22616f8f7: default/busybox/busybox" id=00e590f5-855e-4246-83ea-07a826edc961 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.635300467Z" level=info msg="Starting container: 61507e642e869eb0f3ff69e3d92507bfc67d1eb41d8fc8fe750aa2d22616f8f7" id=14857a36-6b46-4a43-a694-a0d7e7f284cb name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:06 default-k8s-diff-port-443884 crio[775]: time="2025-12-10T23:06:06.637298803Z" level=info msg="Started container" PID=1922 containerID=61507e642e869eb0f3ff69e3d92507bfc67d1eb41d8fc8fe750aa2d22616f8f7 description=default/busybox/busybox id=14857a36-6b46-4a43-a694-a0d7e7f284cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=608232bc2acc067b0720412f95d9f403d787bfeba07a0f67b366b7f807d5994f
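The "container status" table below is CRI-level state for the default-k8s-diff-port-443884 node. A minimal sketch of reproducing it by hand, assuming the crictl binary at the path probed earlier in the log:

	sudo /usr/local/bin/crictl ps -a       # running and exited containers
	sudo /usr/local/bin/crictl images      # images known to CRI-O, e.g. the busybox image pulled above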
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	61507e642e869       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   608232bc2acc0       busybox                                                default
	15477ac26978a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   6cad79281f75a       coredns-66bc5c9577-s8zsm                               kube-system
	0a743dcf02351       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   c7033d6ecc192       storage-provisioner                                    kube-system
	83bd6ae050936       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   cb2440f280e0e       kube-proxy-lwnhd                                       kube-system
	78221e2d24404       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   d67c78a5716a2       kindnet-wtcv9                                          kube-system
	e28ba768ddab3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   1341329ba6389       etcd-default-k8s-diff-port-443884                      kube-system
	e7bf7be4a065a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   361743f8c0fbb       kube-apiserver-default-k8s-diff-port-443884            kube-system
	6aaaa7e407449       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   1ae230a5a3f24       kube-scheduler-default-k8s-diff-port-443884            kube-system
	f00cd67ddece2       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   04aa3b4444aee       kube-controller-manager-default-k8s-diff-port-443884   kube-system
	
	
	==> coredns [15477ac26978a01e700ae3bd7fc7003ebb1d99d437f0c976d9a162f62873e239] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36117 - 1925 "HINFO IN 4243968879227889692.1199713093780154139. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039447585s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-443884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-443884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=default-k8s-diff-port-443884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_05_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-443884
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:06:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:06:01 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:06:01 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:06:01 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:06:01 +0000   Wed, 10 Dec 2025 23:06:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-443884
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                9e4f21fa-7258-4d07-9208-772a36f1e976
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-s8zsm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-443884                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-wtcv9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-443884             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-443884    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-lwnhd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-443884             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-443884 event: Registered Node default-k8s-diff-port-443884 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-443884 status is now: NodeReady
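The node summary above is standard describe-node output. A minimal sketch of regenerating it for this profile, assuming kubectl is pointed at the cluster (or driven through minikube's bundled kubectl):

	kubectl describe node default-k8s-diff-port-443884
	# or via the profile's bundled kubectl:
	minikube -p default-k8s-diff-port-443884 kubectl -- describe node default-k8s-diff-port-443884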
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [e28ba768ddab30a3d59fcf3424e89ae8dd6c5e6752731a50b6cddd601392f4d2] <==
	{"level":"warn","ts":"2025-12-10T23:05:41.680397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.687272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.695956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.703216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.711337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.719022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.725950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.733618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.740937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.748949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.760742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.767632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.786231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.797562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.804160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.812117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.819744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.828793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.836264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.844309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.850834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.869490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.877763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.884879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:05:41.943811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49776","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:06:14 up 48 min,  0 user,  load average: 4.22, 2.92, 1.89
	Linux default-k8s-diff-port-443884 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78221e2d24404bd2300ee7d928e76620798bc03d7771c2f04f621b537da55b65] <==
	I1210 23:05:51.134005       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:05:51.134399       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 23:05:51.134549       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:05:51.134573       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:05:51.134596       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:05:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:05:51.433602       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:05:51.533339       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:05:51.533384       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:05:51.533628       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:05:51.833996       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:05:51.834028       1 metrics.go:72] Registering metrics
	I1210 23:05:51.834087       1 controller.go:711] "Syncing nftables rules"
	I1210 23:06:01.344264       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:06:01.344326       1 main.go:301] handling current node
	I1210 23:06:11.344598       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:06:11.344636       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e7bf7be4a065a874880152fd46d8bc7a5ad064f32f988b2fb9383157c6c8907e] <==
	I1210 23:05:42.527514       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:05:42.530236       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:42.530311       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 23:05:42.535616       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:42.535683       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 23:05:42.629392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:05:43.330847       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 23:05:43.335094       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 23:05:43.335114       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:05:43.811718       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:05:43.846565       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:05:43.935380       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 23:05:43.943408       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 23:05:43.944511       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:05:43.948928       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:05:44.402343       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:05:45.188326       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:05:45.199161       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 23:05:45.206875       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 23:05:50.206407       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:50.209820       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:05:50.354384       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:05:50.503810       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 23:06:13.255110       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:57256: use of closed network connection
	
	
	==> kube-controller-manager [f00cd67ddece2bc498b6c1bb048b9b5345968e82ff242777405e08927dcfe068] <==
	I1210 23:05:49.379677       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 23:05:49.400395       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 23:05:49.401565       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 23:05:49.401584       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:05:49.401595       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 23:05:49.401685       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 23:05:49.401775       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 23:05:49.401976       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 23:05:49.402000       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 23:05:49.402143       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 23:05:49.402346       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 23:05:49.402371       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 23:05:49.403703       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 23:05:49.404108       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 23:05:49.405931       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 23:05:49.405973       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:05:49.405994       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 23:05:49.406048       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 23:05:49.406056       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 23:05:49.406084       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 23:05:49.406397       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:05:49.411040       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:05:49.411725       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-443884" podCIDRs=["10.244.0.0/24"]
	I1210 23:05:49.416890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:06:04.357936       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [83bd6ae050936da71656a912f289beb19b58e681924b110f5c2fc9fcbdf79923] <==
	I1210 23:05:50.951395       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:05:51.029790       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:05:51.130786       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:05:51.130835       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 23:05:51.130934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:05:51.152684       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:05:51.152746       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:05:51.159301       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:05:51.159717       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:05:51.159738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:05:51.161393       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:05:51.161413       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:05:51.161513       1 config.go:200] "Starting service config controller"
	I1210 23:05:51.161520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:05:51.161540       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:05:51.161545       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:05:51.161696       1 config.go:309] "Starting node config controller"
	I1210 23:05:51.161704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:05:51.161711       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:05:51.262361       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 23:05:51.262376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:05:51.262406       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6aaaa7e4074498aec284d4e21cfe217e83d2077827e2ab6229a492b4bb393943] <==
	E1210 23:05:42.393846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 23:05:42.393863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 23:05:42.393870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:05:42.393976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 23:05:42.394335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:05:42.394333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 23:05:42.394326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:05:42.394322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 23:05:42.394433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 23:05:42.394468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:05:42.394496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 23:05:42.394554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 23:05:43.210597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 23:05:43.289509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 23:05:43.347713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:05:43.360865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:05:43.378079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 23:05:43.392803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 23:05:43.423088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 23:05:43.445512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:05:43.454633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 23:05:43.550203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:05:43.554117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 23:05:43.588170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1210 23:05:45.689126       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:05:46 default-k8s-diff-port-443884 kubelet[1322]: E1210 23:05:46.076272    1322 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-443884\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-443884"
	Dec 10 23:05:46 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:46.087963    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-443884" podStartSLOduration=1.087907141 podStartE2EDuration="1.087907141s" podCreationTimestamp="2025-12-10 23:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:46.087793546 +0000 UTC m=+1.152294409" watchObservedRunningTime="2025-12-10 23:05:46.087907141 +0000 UTC m=+1.152407969"
	Dec 10 23:05:46 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:46.100168    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-443884" podStartSLOduration=3.10013749 podStartE2EDuration="3.10013749s" podCreationTimestamp="2025-12-10 23:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:46.100009059 +0000 UTC m=+1.164509887" watchObservedRunningTime="2025-12-10 23:05:46.10013749 +0000 UTC m=+1.164638318"
	Dec 10 23:05:46 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:46.118586    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-443884" podStartSLOduration=1.118568188 podStartE2EDuration="1.118568188s" podCreationTimestamp="2025-12-10 23:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:46.110302366 +0000 UTC m=+1.174803196" watchObservedRunningTime="2025-12-10 23:05:46.118568188 +0000 UTC m=+1.183069061"
	Dec 10 23:05:46 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:46.127404    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-443884" podStartSLOduration=1.127372373 podStartE2EDuration="1.127372373s" podCreationTimestamp="2025-12-10 23:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:46.118692116 +0000 UTC m=+1.183192944" watchObservedRunningTime="2025-12-10 23:05:46.127372373 +0000 UTC m=+1.191873246"
	Dec 10 23:05:49 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:49.467885    1322 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 23:05:49 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:49.468579    1322 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560542    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcf815a4-e235-459b-b10a-31761cb8ad21-lib-modules\") pod \"kube-proxy-lwnhd\" (UID: \"fcf815a4-e235-459b-b10a-31761cb8ad21\") " pod="kube-system/kube-proxy-lwnhd"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560577    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5d31b10-60af-4ff4-bb38-44edc65ef3d3-xtables-lock\") pod \"kindnet-wtcv9\" (UID: \"d5d31b10-60af-4ff4-bb38-44edc65ef3d3\") " pod="kube-system/kindnet-wtcv9"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560603    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5d31b10-60af-4ff4-bb38-44edc65ef3d3-lib-modules\") pod \"kindnet-wtcv9\" (UID: \"d5d31b10-60af-4ff4-bb38-44edc65ef3d3\") " pod="kube-system/kindnet-wtcv9"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560624    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcf815a4-e235-459b-b10a-31761cb8ad21-xtables-lock\") pod \"kube-proxy-lwnhd\" (UID: \"fcf815a4-e235-459b-b10a-31761cb8ad21\") " pod="kube-system/kube-proxy-lwnhd"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560666    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2qcw\" (UniqueName: \"kubernetes.io/projected/fcf815a4-e235-459b-b10a-31761cb8ad21-kube-api-access-m2qcw\") pod \"kube-proxy-lwnhd\" (UID: \"fcf815a4-e235-459b-b10a-31761cb8ad21\") " pod="kube-system/kube-proxy-lwnhd"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560707    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fcf815a4-e235-459b-b10a-31761cb8ad21-kube-proxy\") pod \"kube-proxy-lwnhd\" (UID: \"fcf815a4-e235-459b-b10a-31761cb8ad21\") " pod="kube-system/kube-proxy-lwnhd"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560729    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d5d31b10-60af-4ff4-bb38-44edc65ef3d3-cni-cfg\") pod \"kindnet-wtcv9\" (UID: \"d5d31b10-60af-4ff4-bb38-44edc65ef3d3\") " pod="kube-system/kindnet-wtcv9"
	Dec 10 23:05:50 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:50.560746    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42lc\" (UniqueName: \"kubernetes.io/projected/d5d31b10-60af-4ff4-bb38-44edc65ef3d3-kube-api-access-b42lc\") pod \"kindnet-wtcv9\" (UID: \"d5d31b10-60af-4ff4-bb38-44edc65ef3d3\") " pod="kube-system/kindnet-wtcv9"
	Dec 10 23:05:51 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:51.089294    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lwnhd" podStartSLOduration=1.089274039 podStartE2EDuration="1.089274039s" podCreationTimestamp="2025-12-10 23:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:51.089183263 +0000 UTC m=+6.153684092" watchObservedRunningTime="2025-12-10 23:05:51.089274039 +0000 UTC m=+6.153774867"
	Dec 10 23:05:51 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:05:51.099167    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wtcv9" podStartSLOduration=1.09914308 podStartE2EDuration="1.09914308s" podCreationTimestamp="2025-12-10 23:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:05:51.098955871 +0000 UTC m=+6.163456698" watchObservedRunningTime="2025-12-10 23:05:51.09914308 +0000 UTC m=+6.163643907"
	Dec 10 23:06:01 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:01.526077    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 23:06:01 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:01.642199    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7r9c\" (UniqueName: \"kubernetes.io/projected/24faae58-d6c6-42ad-93d3-3d160895982e-kube-api-access-s7r9c\") pod \"coredns-66bc5c9577-s8zsm\" (UID: \"24faae58-d6c6-42ad-93d3-3d160895982e\") " pod="kube-system/coredns-66bc5c9577-s8zsm"
	Dec 10 23:06:01 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:01.642267    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/81e22dd7-170e-4dfb-abf8-96dde77438ac-tmp\") pod \"storage-provisioner\" (UID: \"81e22dd7-170e-4dfb-abf8-96dde77438ac\") " pod="kube-system/storage-provisioner"
	Dec 10 23:06:01 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:01.642292    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29mwg\" (UniqueName: \"kubernetes.io/projected/81e22dd7-170e-4dfb-abf8-96dde77438ac-kube-api-access-29mwg\") pod \"storage-provisioner\" (UID: \"81e22dd7-170e-4dfb-abf8-96dde77438ac\") " pod="kube-system/storage-provisioner"
	Dec 10 23:06:01 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:01.642392    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24faae58-d6c6-42ad-93d3-3d160895982e-config-volume\") pod \"coredns-66bc5c9577-s8zsm\" (UID: \"24faae58-d6c6-42ad-93d3-3d160895982e\") " pod="kube-system/coredns-66bc5c9577-s8zsm"
	Dec 10 23:06:02 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:02.117919    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.117891932 podStartE2EDuration="11.117891932s" podCreationTimestamp="2025-12-10 23:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:06:02.117445213 +0000 UTC m=+17.181946045" watchObservedRunningTime="2025-12-10 23:06:02.117891932 +0000 UTC m=+17.182392759"
	Dec 10 23:06:04 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:04.128871    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s8zsm" podStartSLOduration=14.128844657 podStartE2EDuration="14.128844657s" podCreationTimestamp="2025-12-10 23:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:06:02.130991366 +0000 UTC m=+17.195492200" watchObservedRunningTime="2025-12-10 23:06:04.128844657 +0000 UTC m=+19.193345485"
	Dec 10 23:06:04 default-k8s-diff-port-443884 kubelet[1322]: I1210 23:06:04.159990    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9sh\" (UniqueName: \"kubernetes.io/projected/c0dc1efe-3497-4123-8574-5fff0265cf3e-kube-api-access-2x9sh\") pod \"busybox\" (UID: \"c0dc1efe-3497-4123-8574-5fff0265cf3e\") " pod="default/busybox"
	
	
	==> storage-provisioner [0a743dcf02351e6285dbaa22e530cfa4ed262c1e6f56af91042c688457afe4f8] <==
	I1210 23:06:01.937462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:06:01.947317       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:06:01.947386       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:06:01.949572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.954886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:06:01.955070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:06:01.955122       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df00d01c-2573-4975-bde4-5f3658985b9c", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-443884_068fa43f-3c3f-47e2-a0ce-19b8636edef9 became leader
	I1210 23:06:01.955279       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-443884_068fa43f-3c3f-47e2-a0ce-19b8636edef9!
	W1210 23:06:01.959062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:01.966003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:06:02.056506       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-443884_068fa43f-3c3f-47e2-a0ce-19b8636edef9!
	W1210 23:06:03.970142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:03.974893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:05.977920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:06.077149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:08.080211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:08.084336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:10.087538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:10.094103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:12.097468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:12.125825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:14.129357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:06:14.135139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-852445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-852445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.110027ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-852445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
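The exit status 11 above carries the same signature as the other Pause/EnableAddonWhileActive failures in this report: per the stderr, minikube's paused-state check runs `sudo runc list -f json` on the node and that probe fails with `open /run/runc: no such file or directory`, which is surfaced as MK_ADDON_ENABLE_PAUSED before the addon is ever touched. A minimal manual reproduction of that probe, assuming the docker driver and this run's profile name (the node container is named after the profile); the exact wrapper minikube uses internally may differ:

    # Sketch only: run the same paused-state probe by hand inside the
    # newest-cni-852445 node container.
    docker exec newest-cni-852445 sudo ls /run/runc          # on this run: No such file or directory
    docker exec newest-cni-852445 sudo runc list -f json     # fails the same way the addon command reports

When the runc state directory is absent, the probe exits non-zero even though the cluster itself is running, so the addon command aborts with exit status 11.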
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-852445
helpers_test.go:244: (dbg) docker inspect newest-cni-852445:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6",
	        "Created": "2025-12-10T23:06:06.670702652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:06.722877287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/hosts",
	        "LogPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6-json.log",
	        "Name": "/newest-cni-852445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-852445:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-852445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6",
	                "LowerDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-852445",
	                "Source": "/var/lib/docker/volumes/newest-cni-852445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-852445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-852445",
	                "name.minikube.sigs.k8s.io": "newest-cni-852445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2177e147f11d270a32e4556572f4d020450fb11ad6c7a976ab6ed64dcefca8b5",
	            "SandboxKey": "/var/run/docker/netns/2177e147f11d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-852445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb4831c90c0ce0c1b95239c1b5316571db21d4f34ab86f2f57dcd68970eb2faf",
	                    "EndpointID": "27e954e46500a091af6ebbda6840c97a9bea8914e17b77c34881dc81d15f88c6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "de:ec:c7:05:79:eb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-852445",
	                        "a578a88253cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
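For reference, the NetworkSettings.Ports block in the inspect output above shows that the node's ports (including 8443, the API server port for this profile, and 22 for SSH) are published only on 127.0.0.1 with dynamically assigned host ports. A quick way to look up one of those mappings without parsing the full inspect JSON, assuming the same container name, is the standard docker port subcommand:

    # Sketch only: show the published host endpoint for the API server port
    # (8443/tcp) of the newest-cni-852445 node container.
    docker port newest-cni-852445 8443/tcp    # on this run: 127.0.0.1:33092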
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-852445 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ start   │ -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p kubernetes-upgrade-000011                                                                                                                                                                                                                         │ kubernetes-upgrade-000011    │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p stopped-upgrade-679204                                                                                                                                                                                                                            │ stopped-upgrade-679204       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ delete  │ -p disable-driver-mounts-614588                                                                                                                                                                                                                      │ disable-driver-mounts-614588 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                           │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p auto-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p embed-certs-468067 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-443884 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-468067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-852445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:06:25
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:06:25.921485  296906 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:25.921611  296906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:25.921621  296906 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:25.921628  296906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:25.921953  296906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:25.922501  296906 out.go:368] Setting JSON to false
	I1210 23:06:25.924020  296906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2928,"bootTime":1765405058,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:06:25.924099  296906 start.go:143] virtualization: kvm guest
	I1210 23:06:25.927918  296906 out.go:179] * [embed-certs-468067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:06:25.929720  296906 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:06:25.929715  296906 notify.go:221] Checking for updates...
	I1210 23:06:25.932679  296906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:06:25.934065  296906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:25.935252  296906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:06:25.938886  296906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:06:25.940212  296906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:06:25.943676  296906 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:25.944325  296906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:06:25.979227  296906 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:06:25.979390  296906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:26.065102  296906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:79 SystemTime:2025-12-10 23:06:26.046807805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:26.065261  296906 docker.go:319] overlay module found
	I1210 23:06:26.067037  296906 out.go:179] * Using the docker driver based on existing profile
	I1210 23:06:26.069635  296906 start.go:309] selected driver: docker
	I1210 23:06:26.069675  296906 start.go:927] validating driver "docker" against &{Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:26.069814  296906 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:06:26.070620  296906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:26.139583  296906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:79 SystemTime:2025-12-10 23:06:26.128336612 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:26.139920  296906 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:26.139963  296906 cni.go:84] Creating CNI manager for ""
	I1210 23:06:26.140035  296906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:26.140083  296906 start.go:353] cluster config:
	{Name:embed-certs-468067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-468067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:26.142906  296906 out.go:179] * Starting "embed-certs-468067" primary control-plane node in "embed-certs-468067" cluster
	I1210 23:06:26.144037  296906 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:06:26.145222  296906 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:06:26.146256  296906 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:26.146297  296906 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:06:26.146305  296906 cache.go:65] Caching tarball of preloaded images
	I1210 23:06:26.146408  296906 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:06:26.146419  296906 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:06:26.146557  296906 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/embed-certs-468067/config.json ...
	I1210 23:06:26.146822  296906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:06:26.175449  296906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:06:26.175473  296906 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:06:26.175493  296906 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:06:26.175532  296906 start.go:360] acquireMachinesLock for embed-certs-468067: {Name:mkee06790bb55bf9682292e893abe3cf62b32e4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:06:26.175597  296906 start.go:364] duration metric: took 44.096µs to acquireMachinesLock for "embed-certs-468067"
	I1210 23:06:26.175619  296906 start.go:96] Skipping create...Using existing machine configuration
	I1210 23:06:26.175626  296906 fix.go:54] fixHost starting: 
	I1210 23:06:26.175909  296906 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:06:26.199824  296906 fix.go:112] recreateIfNeeded on embed-certs-468067: state=Stopped err=<nil>
	W1210 23:06:26.199861  296906 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 23:06:21.644693  288470 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:06:21.649189  288470 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 23:06:21.649212  288470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:06:21.662372  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:06:21.888176  288470 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:06:21.888253  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:21.888308  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-852445 minikube.k8s.io/updated_at=2025_12_10T23_06_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=newest-cni-852445 minikube.k8s.io/primary=true
	I1210 23:06:21.899508  288470 ops.go:34] apiserver oom_adj: -16
	I1210 23:06:21.979326  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:22.479560  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:22.979579  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:23.479485  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:23.979738  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:24.480377  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:24.979800  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:25.479897  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:25.979411  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:26.479375  288470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:26.593050  288470 kubeadm.go:1114] duration metric: took 4.704859182s to wait for elevateKubeSystemPrivileges
	I1210 23:06:26.593096  288470 kubeadm.go:403] duration metric: took 12.715764605s to StartCluster
	I1210 23:06:26.593119  288470 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:26.593219  288470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:26.594708  288470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:26.594984  288470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:06:26.595009  288470 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:26.595294  288470 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:26.595352  288470 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:26.595436  288470 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-852445"
	I1210 23:06:26.595454  288470 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-852445"
	I1210 23:06:26.595481  288470 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:26.596024  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:26.596096  288470 addons.go:70] Setting default-storageclass=true in profile "newest-cni-852445"
	I1210 23:06:26.596125  288470 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852445"
	I1210 23:06:26.596445  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:26.598592  288470 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:26.600122  288470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:26.645905  288470 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:26.647124  288470 addons.go:239] Setting addon default-storageclass=true in "newest-cni-852445"
	I1210 23:06:26.647213  288470 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:26.647725  288470 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:26.647926  288470 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:26.647938  288470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:26.647995  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:26.685201  288470 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:26.685226  288470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:26.685319  288470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:26.688609  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:26.721783  288470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:26.755770  288470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:06:26.868889  288470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:26.946075  288470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:26.953533  288470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:27.158246  288470 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 23:06:27.158463  288470 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:27.158532  288470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:27.365605  288470 api_server.go:72] duration metric: took 770.561089ms to wait for apiserver process to appear ...
	I1210 23:06:27.365633  288470 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:27.365669  288470 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:27.370878  288470 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 23:06:27.371783  288470 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 23:06:27.371808  288470 api_server.go:131] duration metric: took 6.167307ms to wait for apiserver health ...
	I1210 23:06:27.371819  288470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:27.374222  288470 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:06:27.374798  288470 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:27.374825  288470 system_pods.go:61] "coredns-7d764666f9-nlx4t" [2f260fe5-0362-419b-9fa7-b773b56a74f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 23:06:27.374832  288470 system_pods.go:61] "etcd-newest-cni-852445" [09281ba7-a26f-4bfc-b2ec-81fc85f323e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:27.374839  288470 system_pods.go:61] "kindnet-qnlhj" [6573bdb3-e42a-41f9-b284-370c54e28aec] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:27.374846  288470 system_pods.go:61] "kube-apiserver-newest-cni-852445" [22610c50-364e-4ad1-b58d-a7a410acad6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:27.374853  288470 system_pods.go:61] "kube-controller-manager-newest-cni-852445" [1fea0a39-fcaa-43aa-9d98-c5c85bf53fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:27.374861  288470 system_pods.go:61] "kube-proxy-b8hgz" [28018116-263f-4460-bef3-54ee0930fde9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:27.374871  288470 system_pods.go:61] "kube-scheduler-newest-cni-852445" [a16c64c2-4c89-4989-9327-827fa77eff6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:27.374881  288470 system_pods.go:61] "storage-provisioner" [4a2e7f71-19fc-4f51-a7ae-a9a487663a80] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 23:06:27.374890  288470 system_pods.go:74] duration metric: took 3.062524ms to wait for pod list to return data ...
	I1210 23:06:27.374901  288470 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:27.375503  288470 addons.go:530] duration metric: took 780.148676ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:06:27.377242  288470 default_sa.go:45] found service account: "default"
	I1210 23:06:27.377258  288470 default_sa.go:55] duration metric: took 2.351089ms for default service account to be created ...
	I1210 23:06:27.377267  288470 kubeadm.go:587] duration metric: took 782.228839ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 23:06:27.377288  288470 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:27.379399  288470 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:27.379420  288470 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:27.379434  288470 node_conditions.go:105] duration metric: took 2.139966ms to run NodePressure ...
	I1210 23:06:27.379455  288470 start.go:242] waiting for startup goroutines ...
	I1210 23:06:27.661814  288470 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-852445" context rescaled to 1 replicas
	I1210 23:06:27.661851  288470 start.go:247] waiting for cluster config update ...
	I1210 23:06:27.661874  288470 start.go:256] writing updated cluster config ...
	I1210 23:06:27.662142  288470 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:27.710593  288470 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:06:27.712680  288470 out.go:179] * Done! kubectl is now configured to use "newest-cni-852445" cluster and "default" namespace by default
	I1210 23:06:23.536802  291593 out.go:252]   - Booting up control plane ...
	I1210 23:06:23.536960  291593 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:06:23.537095  291593 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:06:23.537200  291593 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:06:23.552829  291593 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:06:23.552924  291593 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:06:23.560220  291593 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:06:23.560492  291593 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:06:23.560557  291593 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:06:23.663086  291593 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:06:23.663225  291593 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:06:24.664362  291593 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001393953s
	I1210 23:06:24.667260  291593 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:06:24.667403  291593 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1210 23:06:24.667523  291593 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:06:24.667608  291593 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:06:26.924502  291593 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.256702468s
	I1210 23:06:27.250636  291593 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.583195254s
	
	
	==> CRI-O <==
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.792089113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.80581149Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=beb5ab05-72e6-4607-98df-ef0fff8c147f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.816059263Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.820577875Z" level=info msg="Ran pod sandbox 31b172bbe9897cda88c93f46e099bc766c159a36df7806f6707c95765f018610 with infra container: kube-system/kube-proxy-b8hgz/POD" id=beb5ab05-72e6-4607-98df-ef0fff8c147f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.823245789Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=03928ae0-4224-4e91-832a-a96aa57ca8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.826951895Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f73b892b-214a-4c81-ba98-9ff1de4dbd15 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.835211933Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.83623867Z" level=info msg="Ran pod sandbox 7036f65aa118b5caaee9bd31876a2d342ca7b6ce07988e92e4b255bd267bc0d2 with infra container: kube-system/kindnet-qnlhj/POD" id=03928ae0-4224-4e91-832a-a96aa57ca8e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.838360993Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=79a07496-4e3a-4b5b-a809-03d1ea2b2fe4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.843032203Z" level=info msg="Creating container: kube-system/kube-proxy-b8hgz/kube-proxy" id=9d38dfaf-5a1a-4279-b4b0-1bb3edc379cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.84351573Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=02d751fb-910d-4253-9762-fc60ed85d6d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.847466303Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c46b2a63-5f23-4a03-8bdf-b72156fb5449 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.849325233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.858379329Z" level=info msg="Creating container: kube-system/kindnet-qnlhj/kindnet-cni" id=e5170e07-be72-4fb2-b38d-e9989c1cfbfa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.8586747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.873433259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.874334092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.875769745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.877681293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.941033803Z" level=info msg="Created container 9e96320d26cf67c06cba646347095b9929d8b360f36078ab1d3690b81bdae3bf: kube-system/kindnet-qnlhj/kindnet-cni" id=e5170e07-be72-4fb2-b38d-e9989c1cfbfa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.94374989Z" level=info msg="Starting container: 9e96320d26cf67c06cba646347095b9929d8b360f36078ab1d3690b81bdae3bf" id=9127e583-810a-45cc-963b-76b7f122d442 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.948724333Z" level=info msg="Started container" PID=1568 containerID=9e96320d26cf67c06cba646347095b9929d8b360f36078ab1d3690b81bdae3bf description=kube-system/kindnet-qnlhj/kindnet-cni id=9127e583-810a-45cc-963b-76b7f122d442 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7036f65aa118b5caaee9bd31876a2d342ca7b6ce07988e92e4b255bd267bc0d2
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.962585233Z" level=info msg="Created container 68d4d70ae8081afc5225bf5e710e444a7beda98a11937fedc076b9184591085b: kube-system/kube-proxy-b8hgz/kube-proxy" id=9d38dfaf-5a1a-4279-b4b0-1bb3edc379cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.970014656Z" level=info msg="Starting container: 68d4d70ae8081afc5225bf5e710e444a7beda98a11937fedc076b9184591085b" id=d4f3bf65-4965-475c-a5a5-6e43f1e65439 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:26 newest-cni-852445 crio[772]: time="2025-12-10T23:06:26.98010572Z" level=info msg="Started container" PID=1572 containerID=68d4d70ae8081afc5225bf5e710e444a7beda98a11937fedc076b9184591085b description=kube-system/kube-proxy-b8hgz/kube-proxy id=d4f3bf65-4965-475c-a5a5-6e43f1e65439 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31b172bbe9897cda88c93f46e099bc766c159a36df7806f6707c95765f018610
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	68d4d70ae8081       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   2 seconds ago       Running             kube-proxy                0                   31b172bbe9897       kube-proxy-b8hgz                            kube-system
	9e96320d26cf6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   7036f65aa118b       kindnet-qnlhj                               kube-system
	2eb59ce484d4a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   16f0d236adee9       kube-apiserver-newest-cni-852445            kube-system
	177361f17fffd       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   a4ad24b0283c8       kube-scheduler-newest-cni-852445            kube-system
	77026a67ba1c4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   adb3e05d378fc       etcd-newest-cni-852445                      kube-system
	aaac677430823       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   ffc7a241406cd       kube-controller-manager-newest-cni-852445   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-852445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-852445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=newest-cni-852445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:06:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-852445
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:06:21 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:06:21 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:06:21 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 23:06:21 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-852445
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                0c48784b-8da6-4402-a03e-1f05808f1702
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-852445                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10s
	  kube-system                 kindnet-qnlhj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-852445             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-852445    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-b8hgz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-852445             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-852445 event: Registered Node newest-cni-852445 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [77026a67ba1c47bd954219fc8ed2b68325af984498a7db682774454820cea9cd] <==
	{"level":"warn","ts":"2025-12-10T23:06:17.873831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.884243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.891627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.899563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.906155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.913909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.921209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.928174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.935732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.946801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.954636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.961733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.969126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.976582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.983463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.989982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:17.998180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.005045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.012544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.019517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.040985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.048853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.057076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.063944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:18.115022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:06:29 up 48 min,  0 user,  load average: 4.04, 2.93, 1.90
	Linux newest-cni-852445 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9e96320d26cf67c06cba646347095b9929d8b360f36078ab1d3690b81bdae3bf] <==
	I1210 23:06:27.253693       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:27.254149       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 23:06:27.254305       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:27.254345       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:27.254381       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:27.459510       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:27.459561       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:27.459590       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:27.460207       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:06:27.750491       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:06:27.750522       1 metrics.go:72] Registering metrics
	I1210 23:06:27.750613       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [2eb59ce484d4a266d53e207fbd4200ea94cc7bee9c13d65ee0f466c62061798c] <==
	I1210 23:06:18.608492       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:06:18.620099       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 23:06:18.620147       1 aggregator.go:187] initial CRD sync complete...
	I1210 23:06:18.620158       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:06:18.620165       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:06:18.620171       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:06:18.796837       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:19.498420       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1210 23:06:19.504229       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1210 23:06:19.504250       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 23:06:20.013618       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:20.054494       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:20.105230       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 23:06:20.111394       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1210 23:06:20.112631       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:20.116902       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:06:20.545420       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:21.027877       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:21.038971       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 23:06:21.047088       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 23:06:26.448222       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 23:06:26.448223       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 23:06:26.500304       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:06:26.506039       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:06:26.553892       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [aaac677430823402f3a06080edafacab88cb189bd1cef505930b0ca34ffc9a69] <==
	I1210 23:06:25.358934       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.359305       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.359442       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:25.363884       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.363929       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.364002       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.365550       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.367220       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.367314       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.367417       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369192       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369273       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369305       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369352       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369407       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369426       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-852445" podCIDRs=["10.42.0.0/24"]
	I1210 23:06:25.369466       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.369491       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.371444       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.371618       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.371696       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.450717       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:25.450740       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 23:06:25.450746       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 23:06:25.459838       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [68d4d70ae8081afc5225bf5e710e444a7beda98a11937fedc076b9184591085b] <==
	I1210 23:06:27.041859       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:27.141313       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:27.243709       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:27.243805       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 23:06:27.243968       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:27.273106       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:27.273167       1 server_linux.go:136] "Using iptables Proxier"
	I1210 23:06:27.279938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:27.280275       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 23:06:27.280296       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:27.281625       1 config.go:200] "Starting service config controller"
	I1210 23:06:27.281663       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:27.281697       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:27.281706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:27.281723       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:27.281731       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:27.281812       1 config.go:309] "Starting node config controller"
	I1210 23:06:27.281876       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:27.281914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:27.382001       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:06:27.382009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:27.382036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [177361f17fffd1d047806b3cf7929a9a2135198d3d2c193c0f512e6e21bf7d5e] <==
	E1210 23:06:18.566450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 23:06:18.566534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1210 23:06:18.566562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1210 23:06:18.566671       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 23:06:18.566731       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1210 23:06:18.566732       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 23:06:19.404373       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1210 23:06:19.405322       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 23:06:19.502010       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1210 23:06:19.503221       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 23:06:19.516443       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 23:06:19.517540       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 23:06:19.574352       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 23:06:19.575469       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 23:06:19.600622       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 23:06:19.601697       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1210 23:06:19.690242       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 23:06:19.691258       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 23:06:19.718373       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1210 23:06:19.719412       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1210 23:06:19.738674       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1210 23:06:19.739561       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1210 23:06:19.739679       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1210 23:06:19.740515       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	I1210 23:06:20.159036       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 23:06:21 newest-cni-852445 kubelet[1307]: I1210 23:06:21.973790    1307 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-852445" podStartSLOduration=1.9737848580000001 podStartE2EDuration="1.973784858s" podCreationTimestamp="2025-12-10 23:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:06:21.961488915 +0000 UTC m=+1.169107477" watchObservedRunningTime="2025-12-10 23:06:21.973784858 +0000 UTC m=+1.181403426"
	Dec 10 23:06:21 newest-cni-852445 kubelet[1307]: I1210 23:06:21.994464    1307 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-852445" podStartSLOduration=1.994443519 podStartE2EDuration="1.994443519s" podCreationTimestamp="2025-12-10 23:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:06:21.983248466 +0000 UTC m=+1.190867051" watchObservedRunningTime="2025-12-10 23:06:21.994443519 +0000 UTC m=+1.202062102"
	Dec 10 23:06:22 newest-cni-852445 kubelet[1307]: E1210 23:06:22.900152    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-852445" containerName="etcd"
	Dec 10 23:06:22 newest-cni-852445 kubelet[1307]: E1210 23:06:22.900239    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:22 newest-cni-852445 kubelet[1307]: E1210 23:06:22.900381    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-852445" containerName="kube-controller-manager"
	Dec 10 23:06:22 newest-cni-852445 kubelet[1307]: E1210 23:06:22.900456    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:23 newest-cni-852445 kubelet[1307]: E1210 23:06:23.902004    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:23 newest-cni-852445 kubelet[1307]: E1210 23:06:23.902081    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-852445" containerName="etcd"
	Dec 10 23:06:23 newest-cni-852445 kubelet[1307]: E1210 23:06:23.902240    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:24 newest-cni-852445 kubelet[1307]: E1210 23:06:24.577320    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-852445" containerName="kube-controller-manager"
	Dec 10 23:06:24 newest-cni-852445 kubelet[1307]: E1210 23:06:24.903435    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:25 newest-cni-852445 kubelet[1307]: I1210 23:06:25.413185    1307 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 10 23:06:25 newest-cni-852445 kubelet[1307]: I1210 23:06:25.414067    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 23:06:25 newest-cni-852445 kubelet[1307]: E1210 23:06:25.906228    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.502908    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28018116-263f-4460-bef3-54ee0930fde9-kube-proxy\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503091    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28018116-263f-4460-bef3-54ee0930fde9-xtables-lock\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503148    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28018116-263f-4460-bef3-54ee0930fde9-lib-modules\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503179    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-cni-cfg\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503239    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-lib-modules\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503265    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-xtables-lock\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503342    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtpq\" (UniqueName: \"kubernetes.io/projected/28018116-263f-4460-bef3-54ee0930fde9-kube-api-access-wdtpq\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: I1210 23:06:26.503374    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8dn\" (UniqueName: \"kubernetes.io/projected/6573bdb3-e42a-41f9-b284-370c54e28aec-kube-api-access-7g8dn\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:26 newest-cni-852445 kubelet[1307]: E1210 23:06:26.772159    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:27 newest-cni-852445 kubelet[1307]: E1210 23:06:27.312491    1307 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-852445" containerName="etcd"
	Dec 10 23:06:27 newest-cni-852445 kubelet[1307]: I1210 23:06:27.939553    1307 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-qnlhj" podStartSLOduration=1.9395312850000002 podStartE2EDuration="1.939531285s" podCreationTimestamp="2025-12-10 23:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 23:06:27.939437529 +0000 UTC m=+7.147056108" watchObservedRunningTime="2025-12-10 23:06:27.939531285 +0000 UTC m=+7.147149863"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-852445 -n newest-cni-852445
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-852445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-nlx4t storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner: exit status 1 (66.59859ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-nlx4t" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-852445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-852445 --alsologtostderr -v=1: exit status 80 (1.890215464s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-852445 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:06:44.970420  304888 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:44.970812  304888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:44.970823  304888 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:44.970829  304888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:44.971159  304888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:44.971503  304888 out.go:368] Setting JSON to false
	I1210 23:06:44.971530  304888 mustload.go:66] Loading cluster: newest-cni-852445
	I1210 23:06:44.972080  304888 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:44.972633  304888 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:45.000411  304888 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:45.000757  304888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:45.087589  304888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-10 23:06:45.07379661 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:45.090221  304888 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-netw
ork:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:newest-cni-852445 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(bool=false) vm
-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 23:06:45.092673  304888 out.go:179] * Pausing node newest-cni-852445 ... 
	I1210 23:06:45.095205  304888 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:45.095615  304888 ssh_runner.go:195] Run: systemctl --version
	I1210 23:06:45.095727  304888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:45.123421  304888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:45.233854  304888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:45.247679  304888 pause.go:52] kubelet running: true
	I1210 23:06:45.247758  304888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:06:45.431804  304888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:06:45.431917  304888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:06:45.558503  304888 cri.go:89] found id: "3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90"
	I1210 23:06:45.558530  304888 cri.go:89] found id: "818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c"
	I1210 23:06:45.558535  304888 cri.go:89] found id: "03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c"
	I1210 23:06:45.558540  304888 cri.go:89] found id: "d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef"
	I1210 23:06:45.558544  304888 cri.go:89] found id: "e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea"
	I1210 23:06:45.558548  304888 cri.go:89] found id: "3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189"
	I1210 23:06:45.558552  304888 cri.go:89] found id: ""
	I1210 23:06:45.558604  304888 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:06:45.576504  304888 retry.go:31] will retry after 358.759457ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:45Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:06:45.935857  304888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:45.953121  304888 pause.go:52] kubelet running: false
	I1210 23:06:45.953200  304888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:06:46.114766  304888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:06:46.114978  304888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:06:46.203812  304888 cri.go:89] found id: "3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90"
	I1210 23:06:46.203837  304888 cri.go:89] found id: "818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c"
	I1210 23:06:46.203842  304888 cri.go:89] found id: "03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c"
	I1210 23:06:46.203847  304888 cri.go:89] found id: "d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef"
	I1210 23:06:46.203851  304888 cri.go:89] found id: "e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea"
	I1210 23:06:46.203856  304888 cri.go:89] found id: "3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189"
	I1210 23:06:46.203860  304888 cri.go:89] found id: ""
	I1210 23:06:46.203905  304888 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:06:46.220446  304888 retry.go:31] will retry after 301.893723ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:46Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:06:46.522970  304888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:46.536630  304888 pause.go:52] kubelet running: false
	I1210 23:06:46.536706  304888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:06:46.658038  304888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:06:46.658133  304888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:06:46.739733  304888 cri.go:89] found id: "3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90"
	I1210 23:06:46.739762  304888 cri.go:89] found id: "818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c"
	I1210 23:06:46.739769  304888 cri.go:89] found id: "03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c"
	I1210 23:06:46.739775  304888 cri.go:89] found id: "d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef"
	I1210 23:06:46.739781  304888 cri.go:89] found id: "e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea"
	I1210 23:06:46.739787  304888 cri.go:89] found id: "3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189"
	I1210 23:06:46.739792  304888 cri.go:89] found id: ""
	I1210 23:06:46.739846  304888 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:06:46.755881  304888 out.go:203] 
	W1210 23:06:46.757110  304888 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:06:46.757131  304888 out.go:285] * 
	* 
	W1210 23:06:46.761444  304888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:06:46.763675  304888 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-852445 --alsologtostderr -v=1 failed: exit status 80
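From the stderr above, the pause failure comes from minikube's container-listing step rather than from the cluster workloads: kubelet was reported running, was stopped with `sudo systemctl disable --now kubelet`, and `crictl ps` returned six container IDs, but every subsequent `sudo runc list -f json` call exited with status 1 ("open /run/runc: no such file or directory"), and once the retries were exhausted the command aborted with GUEST_PAUSE (exit status 80). A minimal, hedged way to isolate just that step on the node, using only commands already visible in this log (profile name and namespace label taken from this run; whether the missing /run/runc directory is expected for this crio configuration is not established by the log), would be:

	out/minikube-linux-amd64 -p newest-cni-852445 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p newest-cni-852445 ssh -- sudo runc list -f json    # in this run: open /run/runc: no such file or directory

The first command reproduces the CRI-side listing that succeeded above; the second reproduces the runc-side listing that failed, which is the point at which the pause path gave up.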
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-852445
helpers_test.go:244: (dbg) docker inspect newest-cni-852445:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6",
	        "Created": "2025-12-10T23:06:06.670702652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300172,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:32.801683807Z",
	            "FinishedAt": "2025-12-10T23:06:31.88097771Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/hosts",
	        "LogPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6-json.log",
	        "Name": "/newest-cni-852445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-852445:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-852445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6",
	                "LowerDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-852445",
	                "Source": "/var/lib/docker/volumes/newest-cni-852445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-852445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-852445",
	                "name.minikube.sigs.k8s.io": "newest-cni-852445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3b895c575692b357a1a30963511ee2fb77a07ef9b3a0db4e74f2c6e75af3d27e",
	            "SandboxKey": "/var/run/docker/netns/3b895c575692",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-852445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb4831c90c0ce0c1b95239c1b5316571db21d4f34ab86f2f57dcd68970eb2faf",
	                    "EndpointID": "40234ebd2b9cd58eb52a15a065b9f7da06a2a33e1df2085bccda05ce2f000a0e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "76:f4:61:ce:9e:b0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-852445",
	                        "a578a88253cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445: exit status 2 (342.981902ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-852445 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-852445 logs -n 25: (1.067719585s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                           │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p auto-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p embed-certs-468067 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-443884 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-468067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-852445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p newest-cni-852445 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-852445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-443884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ image   │ newest-cni-852445 image list --format=json                                                                                                                                                                                                           │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ pause   │ -p newest-cni-852445 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:06:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:06:33.998397  300940 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:33.998736  300940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:33.998745  300940 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:33.998751  300940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:33.999066  300940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:33.999688  300940 out.go:368] Setting JSON to false
	I1210 23:06:34.001279  300940 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2936,"bootTime":1765405058,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:06:34.001394  300940 start.go:143] virtualization: kvm guest
	I1210 23:06:34.006832  300940 out.go:179] * [default-k8s-diff-port-443884] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:06:34.008506  300940 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:06:34.008731  300940 notify.go:221] Checking for updates...
	I1210 23:06:34.011505  300940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:06:34.012883  300940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:34.014143  300940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:06:34.015147  300940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:06:34.016780  300940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:06:34.019612  300940 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:34.020332  300940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:06:34.051068  300940 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:06:34.051183  300940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:34.128889  300940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-10 23:06:34.116849861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:34.129028  300940 docker.go:319] overlay module found
	I1210 23:06:34.132198  300940 out.go:179] * Using the docker driver based on existing profile
	I1210 23:06:34.133592  300940 start.go:309] selected driver: docker
	I1210 23:06:34.133609  300940 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:34.133757  300940 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:06:34.134502  300940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:34.231966  300940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-10 23:06:34.216116371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:34.232308  300940 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:34.232344  300940 cni.go:84] Creating CNI manager for ""
	I1210 23:06:34.232422  300940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:34.232481  300940 start.go:353] cluster config:
	{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:34.235348  300940 out.go:179] * Starting "default-k8s-diff-port-443884" primary control-plane node in "default-k8s-diff-port-443884" cluster
	I1210 23:06:34.236627  300940 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:06:34.237856  300940 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:06:34.238883  300940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:34.238933  300940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:06:34.238933  300940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:06:34.238946  300940 cache.go:65] Caching tarball of preloaded images
	I1210 23:06:34.239069  300940 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:06:34.239095  300940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:06:34.239238  300940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:06:34.264707  300940 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:06:34.264734  300940 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:06:34.264756  300940 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:06:34.264794  300940 start.go:360] acquireMachinesLock for default-k8s-diff-port-443884: {Name:mk4710330ecf7371e663f4e39eab0b9ebe0090d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:06:34.264878  300940 start.go:364] duration metric: took 46.267µs to acquireMachinesLock for "default-k8s-diff-port-443884"
	I1210 23:06:34.264904  300940 start.go:96] Skipping create...Using existing machine configuration
	I1210 23:06:34.264914  300940 fix.go:54] fixHost starting: 
	I1210 23:06:34.265201  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:34.289087  300940 fix.go:112] recreateIfNeeded on default-k8s-diff-port-443884: state=Stopped err=<nil>
	W1210 23:06:34.289137  300940 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 23:06:33.423510  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:33.922992  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:34.423193  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:34.923161  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:35.423484  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:35.504421  291593 kubeadm.go:1114] duration metric: took 5.164791479s to wait for elevateKubeSystemPrivileges
	I1210 23:06:35.504460  291593 kubeadm.go:403] duration metric: took 15.586958934s to StartCluster
	I1210 23:06:35.504480  291593 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:35.504552  291593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:35.506076  291593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:35.506578  291593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:06:35.506587  291593 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:35.506686  291593 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:35.506784  291593 addons.go:70] Setting storage-provisioner=true in profile "auto-177285"
	I1210 23:06:35.506810  291593 addons.go:239] Setting addon storage-provisioner=true in "auto-177285"
	I1210 23:06:35.506816  291593 addons.go:70] Setting default-storageclass=true in profile "auto-177285"
	I1210 23:06:35.506846  291593 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-177285"
	I1210 23:06:35.506854  291593 host.go:66] Checking if "auto-177285" exists ...
	I1210 23:06:35.506786  291593 config.go:182] Loaded profile config "auto-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:35.507283  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:35.507417  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:35.508993  291593 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:35.510323  291593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:35.530601  291593 addons.go:239] Setting addon default-storageclass=true in "auto-177285"
	I1210 23:06:35.530637  291593 host.go:66] Checking if "auto-177285" exists ...
	I1210 23:06:35.530985  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:35.534850  291593 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:33.481611  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 23:06:33.481636  296906 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 23:06:33.481707  296906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:06:33.513785  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:06:33.513877  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:06:33.513955  296906 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:33.513973  296906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:33.514036  296906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:06:33.553849  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:06:33.648218  296906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:33.661420  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 23:06:33.661442  296906 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 23:06:33.663696  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:33.666687  296906 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:06:33.676898  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:33.681220  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 23:06:33.681245  296906 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 23:06:33.703497  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 23:06:33.703671  296906 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 23:06:33.732748  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 23:06:33.732777  296906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 23:06:33.753437  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 23:06:33.753459  296906 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 23:06:33.771137  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 23:06:33.771160  296906 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 23:06:33.789740  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 23:06:33.789763  296906 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 23:06:33.807416  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 23:06:33.807441  296906 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 23:06:33.825503  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:33.825530  296906 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 23:06:33.845553  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:34.976090  296906 node_ready.go:49] node "embed-certs-468067" is "Ready"
	I1210 23:06:34.976125  296906 node_ready.go:38] duration metric: took 1.309405149s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:06:34.976141  296906 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:34.976199  296906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:35.581404  296906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917669318s)
	I1210 23:06:35.581496  296906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.735898772s)
	I1210 23:06:35.581771  296906 api_server.go:72] duration metric: took 2.137049672s to wait for apiserver process to appear ...
	I1210 23:06:35.581786  296906 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:35.581808  296906 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:06:35.582339  296906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.903659584s)
	I1210 23:06:35.584247  296906 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-468067 addons enable metrics-server
	
	I1210 23:06:35.588061  296906 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:35.588085  296906 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:35.601132  296906 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 23:06:35.602513  296906 addons.go:530] duration metric: took 2.157749868s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 23:06:35.537350  291593 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:35.537373  291593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:35.537454  291593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-177285
	I1210 23:06:35.558820  291593 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:35.558852  291593 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:35.558914  291593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-177285
	I1210 23:06:35.570170  291593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/auto-177285/id_rsa Username:docker}
	I1210 23:06:35.585379  291593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/auto-177285/id_rsa Username:docker}
	I1210 23:06:35.617614  291593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:06:35.656684  291593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:35.693853  291593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:35.711281  291593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:35.835627  291593 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1210 23:06:35.837105  291593 node_ready.go:35] waiting up to 15m0s for node "auto-177285" to be "Ready" ...
	I1210 23:06:36.131614  291593 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:06:32.775527  299857 out.go:252] * Restarting existing docker container for "newest-cni-852445" ...
	I1210 23:06:32.775599  299857 cli_runner.go:164] Run: docker start newest-cni-852445
	I1210 23:06:33.068082  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:33.093693  299857 kic.go:430] container "newest-cni-852445" state is running.
	I1210 23:06:33.094145  299857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:33.119435  299857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json ...
	I1210 23:06:33.119708  299857 machine.go:94] provisionDockerMachine start ...
	I1210 23:06:33.119765  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:33.149402  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:33.149823  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:33.149861  299857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:06:33.150957  299857 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45672->127.0.0.1:33104: read: connection reset by peer
	I1210 23:06:36.297894  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852445
	
	I1210 23:06:36.297919  299857 ubuntu.go:182] provisioning hostname "newest-cni-852445"
	I1210 23:06:36.297971  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.316747  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:36.316975  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:36.316989  299857 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852445 && echo "newest-cni-852445" | sudo tee /etc/hostname
	I1210 23:06:36.468525  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852445
	
	I1210 23:06:36.468611  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.489200  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:36.489471  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:36.489507  299857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852445/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:06:36.629240  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:06:36.629269  299857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:06:36.629291  299857 ubuntu.go:190] setting up certificates
	I1210 23:06:36.629317  299857 provision.go:84] configureAuth start
	I1210 23:06:36.629376  299857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:36.652138  299857 provision.go:143] copyHostCerts
	I1210 23:06:36.652215  299857 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:06:36.652229  299857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:06:36.652318  299857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:06:36.652462  299857 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:06:36.652478  299857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:06:36.652522  299857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:06:36.652622  299857 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:06:36.652635  299857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:06:36.652704  299857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:06:36.652790  299857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852445 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852445]
	I1210 23:06:36.895604  299857 provision.go:177] copyRemoteCerts
	I1210 23:06:36.895667  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:06:36.895709  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.914011  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.011012  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:06:37.029088  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:06:37.046901  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:06:37.064657  299857 provision.go:87] duration metric: took 435.311642ms to configureAuth
	I1210 23:06:37.064687  299857 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:06:37.064898  299857 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:37.065009  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.085208  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:37.085461  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:37.085486  299857 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:06:37.398288  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:06:37.398316  299857 machine.go:97] duration metric: took 4.278597254s to provisionDockerMachine
	I1210 23:06:37.398341  299857 start.go:293] postStartSetup for "newest-cni-852445" (driver="docker")
	I1210 23:06:37.398358  299857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:06:37.398438  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:06:37.398494  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.416961  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.512920  299857 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:06:37.516847  299857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:06:37.516875  299857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:06:37.516889  299857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:06:37.516951  299857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:06:37.517048  299857 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:06:37.517191  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:06:37.525142  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:37.543049  299857 start.go:296] duration metric: took 144.690522ms for postStartSetup
	I1210 23:06:37.543139  299857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:06:37.543189  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.132978  291593 addons.go:530] duration metric: took 626.290446ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:06:36.339624  291593 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-177285" context rescaled to 1 replicas
	W1210 23:06:37.840242  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	I1210 23:06:37.562251  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.655787  299857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:06:37.660178  299857 fix.go:56] duration metric: took 4.907194227s for fixHost
	I1210 23:06:37.660199  299857 start.go:83] releasing machines lock for "newest-cni-852445", held for 4.90723805s
	I1210 23:06:37.660250  299857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:37.677781  299857 ssh_runner.go:195] Run: cat /version.json
	I1210 23:06:37.677837  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.677877  299857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:06:37.677948  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.696317  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.697840  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.848157  299857 ssh_runner.go:195] Run: systemctl --version
	I1210 23:06:37.854561  299857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:06:37.891859  299857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:06:37.897193  299857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:06:37.897267  299857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:06:37.905531  299857 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 23:06:37.905561  299857 start.go:496] detecting cgroup driver to use...
	I1210 23:06:37.905593  299857 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:06:37.905640  299857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:06:37.920531  299857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:06:37.932874  299857 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:06:37.932931  299857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:06:37.950688  299857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:06:37.963712  299857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:06:38.047401  299857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:06:38.137777  299857 docker.go:234] disabling docker service ...
	I1210 23:06:38.137848  299857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:06:38.153421  299857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:06:38.166438  299857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:06:38.270774  299857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:06:38.362303  299857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:06:38.376136  299857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:06:38.392159  299857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:06:38.392215  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.402813  299857 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:06:38.402883  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.419583  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.430604  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.439927  299857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:06:38.451796  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.461382  299857 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.471234  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.480950  299857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:06:38.489106  299857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:06:38.497968  299857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:38.589223  299857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:06:38.724489  299857 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:06:38.724549  299857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:06:38.728591  299857 start.go:564] Will wait 60s for crictl version
	I1210 23:06:38.728677  299857 ssh_runner.go:195] Run: which crictl
	I1210 23:06:38.732583  299857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:06:38.759011  299857 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:06:38.759092  299857 ssh_runner.go:195] Run: crio --version
	I1210 23:06:38.791727  299857 ssh_runner.go:195] Run: crio --version
	I1210 23:06:38.822752  299857 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 23:06:38.824759  299857 cli_runner.go:164] Run: docker network inspect newest-cni-852445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:38.845157  299857 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 23:06:38.850199  299857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:38.863435  299857 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 23:06:34.294290  300940 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-443884" ...
	I1210 23:06:34.294396  300940 cli_runner.go:164] Run: docker start default-k8s-diff-port-443884
	I1210 23:06:34.649842  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:34.672994  300940 kic.go:430] container "default-k8s-diff-port-443884" state is running.
	I1210 23:06:34.673891  300940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:06:34.699059  300940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:06:34.699337  300940 machine.go:94] provisionDockerMachine start ...
	I1210 23:06:34.699413  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:34.734965  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:34.735279  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:34.735295  300940 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:06:34.735888  300940 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57772->127.0.0.1:33109: read: connection reset by peer
	I1210 23:06:37.871279  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:06:37.871301  300940 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-443884"
	I1210 23:06:37.871359  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:37.891431  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:37.891751  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:37.891777  300940 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-443884 && echo "default-k8s-diff-port-443884" | sudo tee /etc/hostname
	I1210 23:06:38.040404  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:06:38.040539  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.060025  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:38.060271  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:38.060297  300940 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-443884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-443884/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-443884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:06:38.207950  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:06:38.207983  300940 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:06:38.208029  300940 ubuntu.go:190] setting up certificates
	I1210 23:06:38.208053  300940 provision.go:84] configureAuth start
	I1210 23:06:38.208185  300940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:06:38.231137  300940 provision.go:143] copyHostCerts
	I1210 23:06:38.231222  300940 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:06:38.231245  300940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:06:38.231315  300940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:06:38.231434  300940 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:06:38.231446  300940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:06:38.231477  300940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:06:38.231547  300940 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:06:38.231558  300940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:06:38.231583  300940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:06:38.231659  300940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-443884 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-443884 localhost minikube]
	I1210 23:06:38.317400  300940 provision.go:177] copyRemoteCerts
	I1210 23:06:38.317453  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:06:38.317485  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.335820  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:38.434893  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:06:38.456809  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 23:06:38.476483  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 23:06:38.495869  300940 provision.go:87] duration metric: took 287.784765ms to configureAuth
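
configureAuth above regenerates the machine's server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.76.2, default-k8s-diff-port-443884, localhost, minikube) and copies it to /etc/docker on the node. Those SANs can be verified with openssl; a sketch using the host-side path from this log (not part of the test run):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected to list IP:127.0.0.1, IP:192.168.76.2 and the DNS names above
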
	I1210 23:06:38.495899  300940 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:06:38.496123  300940 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:38.496253  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.515948  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:38.516170  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:38.516183  300940 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:06:38.845931  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:06:38.845959  300940 machine.go:97] duration metric: took 4.146605033s to provisionDockerMachine
	I1210 23:06:38.845974  300940 start.go:293] postStartSetup for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:06:38.845987  300940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:06:38.846060  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:06:38.846115  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.866867  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:38.970298  300940 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:06:38.973902  300940 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:06:38.973932  300940 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:06:38.973946  300940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:06:38.973994  300940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:06:38.974092  300940 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:06:38.974213  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:06:38.982188  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:38.864740  299857 kubeadm.go:884] updating cluster {Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:06:38.864907  299857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:06:38.864962  299857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:38.900681  299857 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:38.900707  299857 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:06:38.900763  299857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:38.938500  299857 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:38.938525  299857 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:06:38.938534  299857 kubeadm.go:935] updating node { 192.168.85.2  8443 v1.35.0-beta.0 crio true true} ...
	I1210 23:06:38.938668  299857 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
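
The [Unit]/[Service] snippet above becomes the kubelet drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (374 bytes). On a node set up this way, the merged unit can be checked with systemctl; a sketch assuming the same file layout, not run by the test:

    # show the effective kubelet unit including the 10-kubeadm.conf drop-in
    systemctl cat kubelet
    # confirm the override ExecStart points at the version-pinned binary
    systemctl show -p ExecStart kubelet | grep -o '/var/lib/minikube/binaries/[^ ]*'
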
	I1210 23:06:38.938752  299857 ssh_runner.go:195] Run: crio config
	I1210 23:06:38.987572  299857 cni.go:84] Creating CNI manager for ""
	I1210 23:06:38.987604  299857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:38.987629  299857 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 23:06:38.987672  299857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852445 NodeName:newest-cni-852445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:06:38.987836  299857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852445"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:06:38.987933  299857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 23:06:38.997531  299857 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:06:38.997599  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:06:39.006142  299857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 23:06:39.021418  299857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 23:06:39.036980  299857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
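
The 2218-byte file written above is the kubeadm config dumped earlier in this log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Outside of minikube, a config like this can be sanity-checked without applying anything via kubeadm's dry-run mode; a sketch, not something this test does:

    # parse and validate the generated config without touching the cluster
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
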
	I1210 23:06:39.051203  299857 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:06:39.055136  299857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:39.065232  299857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:39.157189  299857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:39.184579  299857 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445 for IP: 192.168.85.2
	I1210 23:06:39.184602  299857 certs.go:195] generating shared ca certs ...
	I1210 23:06:39.184621  299857 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:39.184814  299857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:06:39.184910  299857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:06:39.184928  299857 certs.go:257] generating profile certs ...
	I1210 23:06:39.185032  299857 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/client.key
	I1210 23:06:39.185095  299857 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/apiserver.key.948cca2b
	I1210 23:06:39.185149  299857 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/proxy-client.key
	I1210 23:06:39.185272  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:06:39.185302  299857 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:06:39.185311  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:06:39.185337  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:06:39.185361  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:06:39.185393  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:06:39.185443  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:39.186533  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:06:39.207718  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:06:39.230169  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:06:39.255946  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:06:39.285516  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 23:06:39.305423  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:06:39.323436  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:06:39.344198  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:06:39.363470  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:06:39.384229  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:06:39.407810  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:06:39.425886  299857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:06:39.440443  299857 ssh_runner.go:195] Run: openssl version
	I1210 23:06:39.447086  299857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.454596  299857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:06:39.463311  299857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.468939  299857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.468999  299857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.511266  299857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:06:39.520211  299857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.528197  299857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:06:39.536388  299857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.540991  299857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.541054  299857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.600263  299857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:06:39.610730  299857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.620280  299857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:06:39.631063  299857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.635896  299857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.635958  299857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.692353  299857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
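
The ln/openssl sequence above (for 8660.pem, 86602.pem and minikubeCA.pem) builds the standard OpenSSL CA directory layout: each certificate copied to /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash plus ".0" (51391683.0, 3ec20f2e.0 and b5213941.0 here). Collapsed into one command, the equivalent is roughly (a sketch, using minikubeCA.pem as the example):

    # link a CA cert under its subject-hash name so OpenSSL can find it
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem \
        /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0
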
	I1210 23:06:39.703873  299857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:06:39.711510  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:06:39.773521  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:06:39.834749  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:06:39.912533  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:06:39.974089  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:06:40.033417  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
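
Each of the openssl x509 -checkend 86400 calls above exits non-zero if the certificate expires within 86400 seconds (24 hours); here they confirm every control-plane cert is still valid for at least another day before the cluster is restarted. A standalone equivalent for one of the certs (a sketch, path taken from the log):

    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    sudo openssl x509 -noout -enddate  -in "$CERT"
    sudo openssl x509 -noout -checkend 86400 -in "$CERT" \
      && echo "valid for at least 24h" || echo "expires within 24h"
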
	I1210 23:06:40.087764  299857 kubeadm.go:401] StartCluster: {Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:40.087880  299857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:06:40.087934  299857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:06:40.141361  299857 cri.go:89] found id: "03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c"
	I1210 23:06:40.141390  299857 cri.go:89] found id: "d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef"
	I1210 23:06:40.141396  299857 cri.go:89] found id: "e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea"
	I1210 23:06:40.141401  299857 cri.go:89] found id: "3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189"
	I1210 23:06:40.141405  299857 cri.go:89] found id: ""
	I1210 23:06:40.141460  299857 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 23:06:40.157816  299857 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:40Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:06:40.157913  299857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:06:40.170134  299857 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:06:40.170159  299857 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:06:40.170206  299857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:06:40.181214  299857 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:06:40.182332  299857 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-852445" does not appear in /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:40.183223  299857 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-5100/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-852445" cluster setting kubeconfig missing "newest-cni-852445" context setting]
	I1210 23:06:40.184334  299857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.186527  299857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:06:40.197522  299857 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 23:06:40.197554  299857 kubeadm.go:602] duration metric: took 27.389565ms to restartPrimaryControlPlane
	I1210 23:06:40.197565  299857 kubeadm.go:403] duration metric: took 109.811149ms to StartCluster
	I1210 23:06:40.197582  299857 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.197640  299857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:40.200098  299857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.200360  299857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:40.200517  299857 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:40.200603  299857 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-852445"
	I1210 23:06:40.200619  299857 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-852445"
	W1210 23:06:40.200626  299857 addons.go:248] addon storage-provisioner should already be in state true
	I1210 23:06:40.200665  299857 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:40.200709  299857 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:40.200754  299857 addons.go:70] Setting dashboard=true in profile "newest-cni-852445"
	I1210 23:06:40.200764  299857 addons.go:239] Setting addon dashboard=true in "newest-cni-852445"
	W1210 23:06:40.200771  299857 addons.go:248] addon dashboard should already be in state true
	I1210 23:06:40.200788  299857 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:40.201151  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.201243  299857 addons.go:70] Setting default-storageclass=true in profile "newest-cni-852445"
	I1210 23:06:40.201278  299857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852445"
	I1210 23:06:40.201603  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.201772  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.207388  299857 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:40.210195  299857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:40.236267  299857 addons.go:239] Setting addon default-storageclass=true in "newest-cni-852445"
	W1210 23:06:40.236409  299857 addons.go:248] addon default-storageclass should already be in state true
	I1210 23:06:40.236464  299857 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:40.237341  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.243205  299857 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 23:06:40.243208  299857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:40.244759  299857 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 23:06:39.001837  300940 start.go:296] duration metric: took 155.849393ms for postStartSetup
	I1210 23:06:39.001913  300940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:06:39.001967  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:39.021575  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:39.118660  300940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:06:39.123812  300940 fix.go:56] duration metric: took 4.858891888s for fixHost
	I1210 23:06:39.123840  300940 start.go:83] releasing machines lock for "default-k8s-diff-port-443884", held for 4.858948233s
	I1210 23:06:39.123908  300940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:06:39.143547  300940 ssh_runner.go:195] Run: cat /version.json
	I1210 23:06:39.143608  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:39.143616  300940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:06:39.143736  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:39.163406  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:39.164354  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:39.258513  300940 ssh_runner.go:195] Run: systemctl --version
	I1210 23:06:39.326555  300940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:06:39.366148  300940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:06:39.371076  300940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:06:39.371138  300940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:06:39.380621  300940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 23:06:39.380660  300940 start.go:496] detecting cgroup driver to use...
	I1210 23:06:39.380695  300940 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:06:39.380741  300940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:06:39.400236  300940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:06:39.413878  300940 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:06:39.413933  300940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:06:39.429873  300940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:06:39.444420  300940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:06:39.528197  300940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:06:39.644910  300940 docker.go:234] disabling docker service ...
	I1210 23:06:39.644973  300940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:06:39.665196  300940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:06:39.683257  300940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:06:39.805982  300940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:06:39.937108  300940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:06:39.956999  300940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:06:39.975619  300940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:06:39.975707  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:39.989238  300940 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:06:39.989306  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.001456  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.014158  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.026751  300940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:06:40.038413  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.050725  300940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.062566  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.075811  300940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:06:40.086724  300940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:06:40.097719  300940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:40.227325  300940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:06:40.452547  300940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:06:40.452612  300940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:06:40.457899  300940 start.go:564] Will wait 60s for crictl version
	I1210 23:06:40.457961  300940 ssh_runner.go:195] Run: which crictl
	I1210 23:06:40.462915  300940 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:06:40.498177  300940 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:06:40.498313  300940 ssh_runner.go:195] Run: crio --version
	I1210 23:06:40.535783  300940 ssh_runner.go:195] Run: crio --version
	I1210 23:06:40.575936  300940 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:06:36.082509  296906 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:06:36.087836  296906 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:36.087867  296906 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:36.582371  296906 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:06:36.586521  296906 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 23:06:36.587479  296906 api_server.go:141] control plane version: v1.34.2
	I1210 23:06:36.587504  296906 api_server.go:131] duration metric: took 1.005711086s to wait for apiserver health ...
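
The two healthz dumps above show the apiserver coming up: first a 500 because the rbac/bootstrap-roles post-start hook has not completed, then a plain 200 about half a second later. The per-check breakdown that is printed on failure can also be requested explicitly with the verbose query parameter; a sketch (-k skips TLS verification, acceptable only for a local probe like this):

    curl -k 'https://192.168.103.2:8443/healthz?verbose'
    # prints one [+]/[-] line per check, plus "healthz check passed" once all checks are ok
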
	I1210 23:06:36.587512  296906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:36.591359  296906 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:36.591389  296906 system_pods.go:61] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:36.591396  296906 system_pods.go:61] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:36.591403  296906 system_pods.go:61] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:36.591409  296906 system_pods.go:61] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:36.591416  296906 system_pods.go:61] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:36.591422  296906 system_pods.go:61] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:36.591430  296906 system_pods.go:61] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:36.591435  296906 system_pods.go:61] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:36.591459  296906 system_pods.go:74] duration metric: took 3.941041ms to wait for pod list to return data ...
	I1210 23:06:36.591467  296906 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:36.594028  296906 default_sa.go:45] found service account: "default"
	I1210 23:06:36.594045  296906 default_sa.go:55] duration metric: took 2.5739ms for default service account to be created ...
	I1210 23:06:36.594053  296906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:06:36.596909  296906 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:36.596934  296906 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:36.596941  296906 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:36.596953  296906 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:36.596961  296906 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:36.596970  296906 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:36.596977  296906 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:36.596982  296906 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:36.596988  296906 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:36.596994  296906 system_pods.go:126] duration metric: took 2.936216ms to wait for k8s-apps to be running ...
	I1210 23:06:36.597004  296906 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:06:36.597041  296906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:36.610319  296906 system_svc.go:56] duration metric: took 13.303884ms WaitForService to wait for kubelet
	I1210 23:06:36.610352  296906 kubeadm.go:587] duration metric: took 3.165630309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:36.610376  296906 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:36.613420  296906 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:36.613446  296906 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:36.613462  296906 node_conditions.go:105] duration metric: took 3.081579ms to run NodePressure ...
	I1210 23:06:36.613472  296906 start.go:242] waiting for startup goroutines ...
	I1210 23:06:36.613479  296906 start.go:247] waiting for cluster config update ...
	I1210 23:06:36.613491  296906 start.go:256] writing updated cluster config ...
	I1210 23:06:36.613775  296906 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:36.617511  296906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:06:36.620922  296906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qw48c" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 23:06:38.626894  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	W1210 23:06:40.628411  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
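
The pod_ready loop above is minikube waiting up to 4m0s for the labelled kube-system pods; at this point in the log the coredns-66bc5c9577-qw48c pod is still reporting not Ready. Expressed with kubectl, the same wait would look roughly like this (a sketch, assuming the embed-certs-468067 kubeconfig context created by this run):

    kubectl --context embed-certs-468067 -n kube-system \
        wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
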
	I1210 23:06:40.577923  300940 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:40.600795  300940 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:06:40.605409  300940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:40.617631  300940 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
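
The "updating cluster {...}" dump above is minikube's in-memory profile configuration; the same structure is persisted as JSON in the profile directory (.minikube/profiles/<profile>/config.json under the minikube home visible in the paths below). A minimal sketch for inspecting it, assuming that file layout and decoding loosely rather than importing minikube's own config types:

    // inspect_profile.go - sketch; assumes minikube's profile config.json layout.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	// Illustrative path; substitute your own minikube home and profile name.
    	path := os.ExpandEnv("$HOME/.minikube/profiles/default-k8s-diff-port-443884/config.json")

    	raw, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "read config:", err)
    		os.Exit(1)
    	}

    	// Decode into a generic map instead of depending on minikube's internal structs.
    	var cfg map[string]any
    	if err := json.Unmarshal(raw, &cfg); err != nil {
    		fmt.Fprintln(os.Stderr, "parse config:", err)
    		os.Exit(1)
    	}

    	fmt.Println("Name:  ", cfg["Name"])
    	fmt.Println("Driver:", cfg["Driver"])
    	if kc, ok := cfg["KubernetesConfig"].(map[string]any); ok {
    		fmt.Println("K8s:   ", kc["KubernetesVersion"], "runtime:", kc["ContainerRuntime"])
    	}
    }
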
	I1210 23:06:40.617820  300940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:40.617894  300940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:40.659901  300940 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:40.659927  300940 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:06:40.659982  300940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:40.691790  300940 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:40.691815  300940 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:06:40.691825  300940 kubeadm.go:935] updating node { 192.168.76.2  8444 v1.34.2 crio true true} ...
	I1210 23:06:40.691997  300940 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-443884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
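
The unit fragment above is the kubelet drop-in that minikube later writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 378-byte scp further down). The empty ExecStart= line is the usual systemd idiom: it clears the ExecStart inherited from the base kubelet.service so the following line fully replaces the command. An illustrative sketch, not minikube's actual template, rendering such a drop-in from a trimmed-down set of parameters:

    // render_dropin.go - illustrative only; a reduced flag set, not minikube's real template.
    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletOpts is a hypothetical, trimmed-down parameter set for the drop-in.
    type kubeletOpts struct {
    	BinDir   string
    	NodeName string
    	NodeIP   string
    }

    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
    	t := template.Must(template.New("dropin").Parse(dropin))
    	// The second, non-empty ExecStart replaces the one cleared above.
    	_ = t.Execute(os.Stdout, kubeletOpts{
    		BinDir:   "/var/lib/minikube/binaries/v1.34.2",
    		NodeName: "default-k8s-diff-port-443884",
    		NodeIP:   "192.168.76.2",
    	})
    }
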
	I1210 23:06:40.692098  300940 ssh_runner.go:195] Run: crio config
	I1210 23:06:40.752609  300940 cni.go:84] Creating CNI manager for ""
	I1210 23:06:40.752650  300940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:40.752670  300940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:06:40.752702  300940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-443884 NodeName:default-k8s-diff-port-443884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:06:40.752850  300940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-443884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:06:40.752925  300940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:06:40.763697  300940 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:06:40.763757  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:06:40.774128  300940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 23:06:40.791063  300940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:06:40.807363  300940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
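
The kubeadm config printed above (and copied to the node here as kubeadm.yaml.new) is a single file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Further down it is diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring. A small stdlib-only sketch that splits such a file and reports each document's kind:

    // list_kinds.go - sketch: list the "kind:" of each document in a multi-doc kubeadm.yaml.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml.new
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	doc := 0
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "---":
    			doc++ // document separator
    		case strings.HasPrefix(line, "kind:"):
    			kind := strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
    			fmt.Printf("document %d: %s\n", doc, kind)
    		}
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
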
	I1210 23:06:40.823727  300940 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:06:40.828562  300940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:40.841823  300940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:40.954208  300940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:40.980761  300940 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884 for IP: 192.168.76.2
	I1210 23:06:40.980786  300940 certs.go:195] generating shared ca certs ...
	I1210 23:06:40.980830  300940 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.981045  300940 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:06:40.981136  300940 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:06:40.981152  300940 certs.go:257] generating profile certs ...
	I1210 23:06:40.981255  300940 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key
	I1210 23:06:40.981338  300940 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94
	I1210 23:06:40.981388  300940 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key
	I1210 23:06:40.981522  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:06:40.981557  300940 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:06:40.981566  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:06:40.981598  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:06:40.981627  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:06:40.981688  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:06:40.981745  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:40.982579  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:06:41.006398  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:06:41.029138  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:06:41.055979  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:06:41.090931  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 23:06:41.115519  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:06:41.143539  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:06:41.167967  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:06:41.191741  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:06:41.216094  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:06:41.238887  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:06:41.262280  300940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:06:41.279468  300940 ssh_runner.go:195] Run: openssl version
	I1210 23:06:41.288146  300940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.298581  300940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:06:41.309025  300940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.314246  300940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.314310  300940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.372078  300940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:06:41.383531  300940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.394581  300940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:06:41.404966  300940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.410479  300940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.410543  300940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.469741  300940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:06:41.480412  300940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.491482  300940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:06:41.502157  300940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.507486  300940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.507545  300940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.566047  300940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
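
Each extra CA above is copied into /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then checked for the OpenSSL subject-hash symlink (for example b5213941.0 for minikubeCA.pem) that OpenSSL-based clients use to locate trusted CAs. A sketch of creating that hash link, assuming root privileges and an openssl binary on PATH:

    // install_ca.go - sketch: link a CA into /etc/ssl/certs under its OpenSSL subject hash.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

    	// openssl prints the subject hash that OpenSSL-based clients look up (e.g. b5213941).
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "openssl:", err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out))

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale symlink, mirroring the ln -fs calls in the log.
    	_ = os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		fmt.Fprintln(os.Stderr, "symlink:", err)
    		os.Exit(1)
    	}
    	fmt.Println(link, "->", pem)
    }
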
	I1210 23:06:41.576696  300940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:06:41.582841  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:06:41.638345  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:06:41.706395  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:06:41.769498  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:06:41.833360  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:06:41.892236  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
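
The run of "openssl x509 -noout ... -checkend 86400" calls above verifies that each control-plane certificate remains valid for at least another 24 hours (86400 seconds) before the existing certificates are reused. The equivalent check can be done in-process with crypto/x509, as in this sketch:

    // checkend.go - sketch: report certs expiring within 24h, like `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	for _, path := range os.Args[1:] { // e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
    		data, err := os.ReadFile(path)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, path, err)
    			continue
    		}
    		block, _ := pem.Decode(data)
    		if block == nil {
    			fmt.Fprintln(os.Stderr, path, "no PEM data")
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, path, err)
    			continue
    		}
    		if time.Until(cert.NotAfter) < 24*time.Hour {
    			fmt.Printf("%s expires at %s (within 24h)\n", path, cert.NotAfter)
    		} else {
    			fmt.Printf("%s ok until %s\n", path, cert.NotAfter)
    		}
    	}
    }
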
	I1210 23:06:41.954681  300940 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:41.954804  300940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:06:41.954887  300940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:06:42.001767  300940 cri.go:89] found id: "26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1"
	I1210 23:06:42.001791  300940 cri.go:89] found id: "42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865"
	I1210 23:06:42.001797  300940 cri.go:89] found id: "ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2"
	I1210 23:06:42.001801  300940 cri.go:89] found id: "2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f"
	I1210 23:06:42.001806  300940 cri.go:89] found id: ""
	I1210 23:06:42.001849  300940 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 23:06:42.030546  300940 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:42Z" level=error msg="open /run/runc: no such file or directory"
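
Before restarting the control plane, minikube asks runc for any paused containers so it can unpause them first; here "runc list" fails because /run/runc does not exist yet, and the warning is tolerated rather than treated as fatal. A hedged sketch of that probe, shelling out the same way the log does and interpreting a missing state directory as "nothing is paused":

    // paused_probe.go - sketch: treat a missing runc state dir as "no paused containers".
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runcContainer covers only the fields this sketch needs from `runc list -f json`.
    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func pausedContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		if strings.Contains(string(out), "no such file or directory") {
    			return nil, nil // state dir absent: nothing has been paused yet
    		}
    		return nil, fmt.Errorf("runc list: %v: %s", err, out)
    	}
    	var cs []runcContainer
    	if err := json.Unmarshal(out, &cs); err != nil {
    		return nil, err
    	}
    	var paused []string
    	for _, c := range cs {
    		if c.Status == "paused" {
    			paused = append(paused, c.ID)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	ids, err := pausedContainers()
    	fmt.Println("paused:", ids, "err:", err)
    }
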
	I1210 23:06:42.030716  300940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:06:42.044302  300940 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:06:42.044327  300940 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:06:42.044379  300940 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:06:42.055126  300940 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:06:42.056570  300940 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-443884" does not appear in /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:42.057658  300940 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-5100/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-443884" cluster setting kubeconfig missing "default-k8s-diff-port-443884" context setting]
	I1210 23:06:42.059194  300940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:42.061908  300940 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:06:42.079840  300940 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 23:06:42.079951  300940 kubeadm.go:602] duration metric: took 35.615134ms to restartPrimaryControlPlane
	I1210 23:06:42.079995  300940 kubeadm.go:403] duration metric: took 125.323825ms to StartCluster
	I1210 23:06:42.080047  300940 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:42.080161  300940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:42.084071  300940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:42.084536  300940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:42.084601  300940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:42.085200  300940 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-443884"
	I1210 23:06:42.085295  300940 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-443884"
	W1210 23:06:42.085329  300940 addons.go:248] addon storage-provisioner should already be in state true
	I1210 23:06:42.085248  300940 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-443884"
	I1210 23:06:42.085473  300940 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-443884"
	W1210 23:06:42.085481  300940 addons.go:248] addon dashboard should already be in state true
	I1210 23:06:42.085508  300940 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:06:42.084796  300940 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:42.085258  300940 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-443884"
	I1210 23:06:42.085942  300940 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-443884"
	I1210 23:06:42.086277  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.086518  300940 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:06:42.087242  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.086831  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.089972  300940 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:42.092776  300940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:42.123466  300940 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-443884"
	W1210 23:06:42.123493  300940 addons.go:248] addon default-storageclass should already be in state true
	I1210 23:06:42.123523  300940 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:06:42.123993  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.129156  300940 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:42.130633  300940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:42.130666  300940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:42.130734  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:42.136811  300940 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 23:06:42.138347  300940 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 23:06:40.245565  299857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:40.245632  299857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:40.245725  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:40.248387  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 23:06:40.248407  299857 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 23:06:40.248696  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:40.277555  299857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:40.277578  299857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:40.277635  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:40.287112  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:40.288514  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:40.308721  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:40.378056  299857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:40.396028  299857 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:40.396108  299857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:40.414620  299857 api_server.go:72] duration metric: took 214.224289ms to wait for apiserver process to appear ...
	I1210 23:06:40.414665  299857 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:40.414688  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:40.419399  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 23:06:40.419428  299857 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 23:06:40.419623  299857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:40.432696  299857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:40.440049  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 23:06:40.440072  299857 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 23:06:40.458369  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 23:06:40.458393  299857 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 23:06:40.478235  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 23:06:40.478262  299857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 23:06:40.498123  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 23:06:40.498157  299857 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 23:06:40.515237  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 23:06:40.515265  299857 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 23:06:40.531579  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 23:06:40.531607  299857 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 23:06:40.548879  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 23:06:40.548906  299857 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 23:06:40.566434  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:40.566460  299857 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 23:06:40.584021  299857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:42.336809  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:06:42.336847  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:06:42.336865  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:42.388168  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:06:42.388276  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:06:42.415493  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:42.433716  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:06:42.433746  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:06:42.915189  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:42.928263  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:42.928320  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
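
The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles is still pending); the wait loop simply keeps polling /healthz until it returns a plain 200 "ok", which happens at 23:06:43 below. A minimal polling sketch, skipping TLS verification because this probe only checks readiness, not server identity:

    // healthz_poll.go - sketch: poll an apiserver /healthz until it returns 200 or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Readiness probe only; real clients should trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz: %d (retrying)\n", resp.StatusCode)
    		} else {
    			fmt.Println("healthz:", err, "(retrying)")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("healthz: timed out")
    }
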
	I1210 23:06:43.299625  299857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.879943798s)
	I1210 23:06:43.299636  299857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.866915243s)
	I1210 23:06:43.299819  299857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.715756541s)
	I1210 23:06:43.302089  299857 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-852445 addons enable metrics-server
	
	I1210 23:06:43.317451  299857 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 23:06:39.841062  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	W1210 23:06:41.841704  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	I1210 23:06:42.139758  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 23:06:42.139817  300940 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 23:06:42.139973  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:42.162604  300940 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:42.162732  300940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:42.162828  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:42.165111  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:42.188810  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:42.194934  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:42.328003  300940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:42.349344  300940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:42.375531  300940 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:06:42.387691  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 23:06:42.387829  300940 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 23:06:42.428474  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 23:06:42.428506  300940 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 23:06:42.467773  300940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:42.484598  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 23:06:42.484680  300940 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 23:06:42.554689  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 23:06:42.554711  300940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 23:06:42.588602  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 23:06:42.588652  300940 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 23:06:42.619127  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 23:06:42.619149  300940 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 23:06:42.649837  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 23:06:42.649862  300940 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 23:06:42.674252  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 23:06:42.674281  300940 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 23:06:42.694637  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:42.694689  300940 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 23:06:42.723045  300940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:43.319077  299857 addons.go:530] duration metric: took 3.118537165s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 23:06:43.415395  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:43.420773  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:43.420800  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:43.915094  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:43.924140  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 23:06:43.925414  299857 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 23:06:43.925441  299857 api_server.go:131] duration metric: took 3.510768581s to wait for apiserver health ...
	I1210 23:06:43.925468  299857 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:43.932186  299857 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:43.932377  299857 system_pods.go:61] "coredns-7d764666f9-nlx4t" [2f260fe5-0362-419b-9fa7-b773b56a74f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 23:06:43.932426  299857 system_pods.go:61] "etcd-newest-cni-852445" [09281ba7-a26f-4bfc-b2ec-81fc85f323e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:43.932468  299857 system_pods.go:61] "kindnet-qnlhj" [6573bdb3-e42a-41f9-b284-370c54e28aec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:43.932505  299857 system_pods.go:61] "kube-apiserver-newest-cni-852445" [22610c50-364e-4ad1-b58d-a7a410acad6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:43.932537  299857 system_pods.go:61] "kube-controller-manager-newest-cni-852445" [1fea0a39-fcaa-43aa-9d98-c5c85bf53fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:43.932555  299857 system_pods.go:61] "kube-proxy-b8hgz" [28018116-263f-4460-bef3-54ee0930fde9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:43.932604  299857 system_pods.go:61] "kube-scheduler-newest-cni-852445" [a16c64c2-4c89-4989-9327-827fa77eff6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:43.932623  299857 system_pods.go:61] "storage-provisioner" [4a2e7f71-19fc-4f51-a7ae-a9a487663a80] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 23:06:43.932640  299857 system_pods.go:74] duration metric: took 7.164458ms to wait for pod list to return data ...
	I1210 23:06:43.932679  299857 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:43.939486  299857 default_sa.go:45] found service account: "default"
	I1210 23:06:43.939513  299857 default_sa.go:55] duration metric: took 6.82645ms for default service account to be created ...
	I1210 23:06:43.939529  299857 kubeadm.go:587] duration metric: took 3.739138349s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 23:06:43.939549  299857 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:43.943435  299857 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:43.943526  299857 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:43.943563  299857 node_conditions.go:105] duration metric: took 4.008818ms to run NodePressure ...
	I1210 23:06:43.943589  299857 start.go:242] waiting for startup goroutines ...
	I1210 23:06:43.943610  299857 start.go:247] waiting for cluster config update ...
	I1210 23:06:43.943634  299857 start.go:256] writing updated cluster config ...
	I1210 23:06:43.943956  299857 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:44.018002  299857 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:06:44.020960  299857 out.go:179] * Done! kubectl is now configured to use "newest-cni-852445" cluster and "default" namespace by default
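
The closing note reports kubectl 1.34.3 against a 1.35.0-beta.0 cluster, a minor skew of 1, which is inside kubectl's supported window of one minor version in either direction, so it is only mentioned rather than warned about. A small sketch of the same skew computation:

    // skew.go - sketch: compute the kubectl/apiserver minor-version skew as reported in the log.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor version from strings like "1.34.3" or "1.35.0-beta.0".
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return -1
    	}
    	m, err := strconv.Atoi(parts[1])
    	if err != nil {
    		return -1
    	}
    	return m
    }

    func main() {
    	client, server := "1.34.3", "1.35.0-beta.0" // values from the log above
    	skew := minor(server) - minor(client)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew)
    	if skew > 1 {
    		fmt.Println("warning: kubectl is outside the supported +/-1 minor version window")
    	}
    }
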
	I1210 23:06:44.343013  300940 node_ready.go:49] node "default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:44.343051  300940 node_ready.go:38] duration metric: took 1.967467282s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:06:44.343067  300940 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:44.343132  300940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:45.153751  300940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.685940335s)
	I1210 23:06:45.154107  300940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.431020687s)
	I1210 23:06:45.154401  300940 api_server.go:72] duration metric: took 3.069517223s to wait for apiserver process to appear ...
	I1210 23:06:45.154417  300940 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:45.154436  300940 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 23:06:45.154730  300940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.805349781s)
	I1210 23:06:45.156612  300940 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-443884 addons enable metrics-server
	
	I1210 23:06:45.161624  300940 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:45.161664  300940 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:45.166144  300940 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 23:06:42.636686  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	W1210 23:06:45.128721  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.587309438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.590870202Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3f6be5a7-85cd-4be2-97c5-8fd061b3e005 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.591438841Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c768b551-0486-4544-93eb-4d16ba906717 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.593070337Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.593718797Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.594885105Z" level=info msg="Ran pod sandbox 09c842d6094c8797613b5462917d431a99888d3cfdcb034e744df9acaff64af4 with infra container: kube-system/kube-proxy-b8hgz/POD" id=3f6be5a7-85cd-4be2-97c5-8fd061b3e005 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.595686319Z" level=info msg="Ran pod sandbox b9e3efecea7d5a6dab765f79bf491c5b60d32268d98e6c70cfb18f3d94b60dd5 with infra container: kube-system/kindnet-qnlhj/POD" id=c768b551-0486-4544-93eb-4d16ba906717 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.598095298Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=570b04be-60c1-476a-90ae-ff570cc2a11b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.598112673Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=795b4cca-6e36-4f4e-a724-21008e5755b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.599800309Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=354c8f2d-bdcd-4d6b-a365-a5c81f087f7e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.600921553Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=33b8fddd-c9bd-4d86-bbb2-362754996a35 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.602741368Z" level=info msg="Creating container: kube-system/kube-proxy-b8hgz/kube-proxy" id=e5d5ecc2-e4d7-41d6-8741-33975fba48f2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.602877743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.603340105Z" level=info msg="Creating container: kube-system/kindnet-qnlhj/kindnet-cni" id=de144ed7-65ab-4568-a4d5-ace508c26edf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.603504908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.61144688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.612387316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.612599983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.613252055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.656443462Z" level=info msg="Created container 3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90: kube-system/kindnet-qnlhj/kindnet-cni" id=de144ed7-65ab-4568-a4d5-ace508c26edf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.657475548Z" level=info msg="Starting container: 3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90" id=a1dc7609-b589-4a50-af38-dbd2c0d8fd74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.659745242Z" level=info msg="Started container" PID=1053 containerID=3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90 description=kube-system/kindnet-qnlhj/kindnet-cni id=a1dc7609-b589-4a50-af38-dbd2c0d8fd74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9e3efecea7d5a6dab765f79bf491c5b60d32268d98e6c70cfb18f3d94b60dd5
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.66537584Z" level=info msg="Created container 818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c: kube-system/kube-proxy-b8hgz/kube-proxy" id=e5d5ecc2-e4d7-41d6-8741-33975fba48f2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.666427814Z" level=info msg="Starting container: 818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c" id=c81172ab-9655-4f58-b716-2435c8328a1f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.670140874Z" level=info msg="Started container" PID=1054 containerID=818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c description=kube-system/kube-proxy-b8hgz/kube-proxy id=c81172ab-9655-4f58-b716-2435c8328a1f name=/runtime.v1.RuntimeService/StartContainer sandboxID=09c842d6094c8797613b5462917d431a99888d3cfdcb034e744df9acaff64af4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3c995f571ddc8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   b9e3efecea7d5       kindnet-qnlhj                               kube-system
	818179dd96eb8       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   09c842d6094c8       kube-proxy-b8hgz                            kube-system
	03e6e0b39697a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   2f596549ff3fe       kube-apiserver-newest-cni-852445            kube-system
	d827e3c942930       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   619f1521ac12c       kube-scheduler-newest-cni-852445            kube-system
	e9fc0c904d79f       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   3e9e3f6edfd04       kube-controller-manager-newest-cni-852445   kube-system
	3927c2b5bd86d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   2d47b6688fece       etcd-newest-cni-852445                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-852445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-852445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=newest-cni-852445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:06:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-852445
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:06:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-852445
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                0c48784b-8da6-4402-a03e-1f05808f1702
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-852445                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-qnlhj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-852445             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-852445    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-b8hgz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-852445             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-852445 event: Registered Node newest-cni-852445 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-852445 event: Registered Node newest-cni-852445 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189] <==
	{"level":"warn","ts":"2025-12-10T23:06:41.304854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.320605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.331283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.340884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.350219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.361263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.367255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.377328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.384694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.401180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.407093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.416029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.423900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.432196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.440240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.449551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.468225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.477555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.486418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.499328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.511991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.520860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.532362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.541941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.613894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33572","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:06:48 up 49 min,  0 user,  load average: 8.28, 3.92, 2.24
	Linux newest-cni-852445 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90] <==
	I1210 23:06:43.941078       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:43.941520       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 23:06:43.941716       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:43.941793       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:43.941822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:44.243732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:44.243867       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:44.243887       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:44.244067       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:06:44.637061       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:06:44.637103       1 metrics.go:72] Registering metrics
	I1210 23:06:44.637201       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c] <==
	I1210 23:06:42.498843       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 23:06:42.501706       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 23:06:42.501809       1 aggregator.go:187] initial CRD sync complete...
	I1210 23:06:42.501822       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:06:42.501830       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:06:42.501836       1 cache.go:39] Caches are synced for autoregister controller
	E1210 23:06:42.506845       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 23:06:42.525138       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:06:42.540723       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:42.540810       1 policy_source.go:248] refreshing policies
	I1210 23:06:42.589296       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:42.978321       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:06:43.029536       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:43.063816       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:43.075220       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:43.090012       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:43.164733       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.105.251"}
	I1210 23:06:43.186722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.53.19"}
	I1210 23:06:43.287926       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 23:06:46.011867       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:46.011946       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:46.058464       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:06:46.108995       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:46.108993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:46.260927       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea] <==
	I1210 23:06:45.589227       1 range_allocator.go:177] "Sending events to api server"
	I1210 23:06:45.581419       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.589299       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-852445"
	I1210 23:06:45.589389       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 23:06:45.589397       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:45.589404       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581558       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.589472       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 23:06:45.582225       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582201       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581850       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581492       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581918       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582036       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582089       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.584119       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:45.580886       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582898       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.583169       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582358       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582519       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.678774       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.678800       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 23:06:45.678808       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 23:06:45.690232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c] <==
	I1210 23:06:43.722962       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:43.806606       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:43.907319       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:43.907355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 23:06:43.907463       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:43.958535       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:43.958725       1 server_linux.go:136] "Using iptables Proxier"
	I1210 23:06:43.966878       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:43.967951       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 23:06:43.967989       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:43.969353       1 config.go:200] "Starting service config controller"
	I1210 23:06:43.969424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:43.969498       1 config.go:309] "Starting node config controller"
	I1210 23:06:43.969504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:43.969511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:43.969865       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:43.969876       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:43.969893       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:43.969897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:44.070580       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:06:44.070951       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:44.070975       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef] <==
	I1210 23:06:40.268778       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:06:42.362918       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:06:42.363036       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:06:42.363052       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:06:42.363062       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:06:42.479112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 23:06:42.479151       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:42.502846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:42.502883       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:42.506069       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:06:42.506165       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:06:42.603062       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.587858     669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: E1210 23:06:42.605028     669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-852445\" already exists" pod="kube-system/kube-apiserver-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.606500     669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: E1210 23:06:42.630581     669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-852445\" already exists" pod="kube-system/kube-controller-manager-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.644087     669 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.644332     669 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.644472     669 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.646868     669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.267910     669 apiserver.go:52] "Watching apiserver"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.281294     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-852445" containerName="etcd"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.281765     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.282058     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-852445" containerName="kube-controller-manager"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.282334     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.366626     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.369175     669 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.453884     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-xtables-lock\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.453953     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-lib-modules\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.453979     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28018116-263f-4460-bef3-54ee0930fde9-lib-modules\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.454066     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-cni-cfg\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.455209     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28018116-263f-4460-bef3-54ee0930fde9-xtables-lock\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:44 newest-cni-852445 kubelet[669]: E1210 23:06:44.303559     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:44 newest-cni-852445 kubelet[669]: E1210 23:06:44.495200     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-852445" containerName="kube-controller-manager"
	Dec 10 23:06:45 newest-cni-852445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:06:45 newest-cni-852445 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:06:45 newest-cni-852445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
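The healthz output near the top of the log above returns 500 only because the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet, so minikube keeps re-polling the endpoint. Below is a minimal Go sketch of an equivalent poll, assuming the apiserver address taken from the log and skipping certificate verification purely for illustration (a real check would trust the cluster CA); it is not minikube's own implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: apiserver address copied from the log above.
	const url = "https://192.168.76.2:8444/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; a real check should verify against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz request failed:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // all post-start hooks reported ok
			}
		}
		time.Sleep(time.Second) // retry until the hooks finish or we give up
	}
}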
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-852445 -n newest-cni-852445
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-852445 -n newest-cni-852445: exit status 2 (338.288265ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
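The --format={{.APIServer}} and --format={{.Host}} flags in the status calls above are Go text/template expressions evaluated against minikube's status structure. The toy sketch below shows the same mechanism with a stand-in struct whose field names are chosen for illustration rather than taken from minikube's code.

package main

import (
	"os"
	"text/template"
)

// Stand-in for the status fields the report queries; the real struct
// lives in minikube's codebase and may differ.
type status struct {
	Host      string
	APIServer string
}

func main() {
	s := status{Host: "Running", APIServer: "Paused"}
	// Same template syntax as `minikube status --format={{.Host}}`.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}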
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-852445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp: exit status 1 (66.17957ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-nlx4t" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6svw4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-tcglp" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp: exit status 1
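The non-running pods are collected above with a kubectl field selector on status.phase, and the follow-up describe fails because those pods no longer exist by the time it runs. Purely as an illustration (the harness shells out to kubectl rather than using a client library), the same filter could be expressed with client-go; the kubeconfig path below is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: local kubeconfig path; the CI node keeps its copy at
	// /var/lib/minikube/kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same filter as the kubectl call above: anything not in the Running phase.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}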
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-852445
helpers_test.go:244: (dbg) docker inspect newest-cni-852445:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6",
	        "Created": "2025-12-10T23:06:06.670702652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300172,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:32.801683807Z",
	            "FinishedAt": "2025-12-10T23:06:31.88097771Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/hosts",
	        "LogPath": "/var/lib/docker/containers/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6/a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6-json.log",
	        "Name": "/newest-cni-852445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-852445:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-852445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a578a88253cc81c91b15c940bd482b254492134bdd66c01d39a29421ccd3d8e6",
	                "LowerDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0143e9fd060ab130c2b62a8de1fbdebed5c4dfeed7a7c32a4b808cf1cbb7e6df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-852445",
	                "Source": "/var/lib/docker/volumes/newest-cni-852445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-852445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-852445",
	                "name.minikube.sigs.k8s.io": "newest-cni-852445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3b895c575692b357a1a30963511ee2fb77a07ef9b3a0db4e74f2c6e75af3d27e",
	            "SandboxKey": "/var/run/docker/netns/3b895c575692",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-852445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb4831c90c0ce0c1b95239c1b5316571db21d4f34ab86f2f57dcd68970eb2faf",
	                    "EndpointID": "40234ebd2b9cd58eb52a15a065b9f7da06a2a33e1df2085bccda05ce2f000a0e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "76:f4:61:ce:9e:b0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-852445",
	                        "a578a88253cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
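The docker inspect dump above is the raw JSON the post-mortem helper captures for the node container. The short sketch below reads the same state and network fields with the Docker Engine Go SDK, assuming the default Docker socket and the container name shown in the report; it is an illustration, not the helper's own code.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Assumption: Docker reachable via the standard environment/socket.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "newest-cni-852445")
	if err != nil {
		log.Fatal(err)
	}
	// The fields mirrored in the report: running/paused state and the node IP.
	fmt.Println("Status:", info.State.Status, "Paused:", info.State.Paused)
	if ep, ok := info.NetworkSettings.Networks["newest-cni-852445"]; ok {
		fmt.Println("IP:", ep.IPAddress)
	}
}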
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445: exit status 2 (328.130826ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-852445 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-852445 logs -n 25: (1.371577234s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ old-k8s-version-280530 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p old-k8s-version-280530 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:06 UTC │
	│ image   │ no-preload-092439 image list --format=json                                                                                                                                                                                                           │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │ 10 Dec 25 23:05 UTC │
	│ pause   │ -p no-preload-092439 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:05 UTC │                     │
	│ delete  │ -p old-k8s-version-280530                                                                                                                                                                                                                            │ old-k8s-version-280530       │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-468067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ delete  │ -p no-preload-092439                                                                                                                                                                                                                                 │ no-preload-092439            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p auto-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p embed-certs-468067 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-443884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-443884 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-468067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-852445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ stop    │ -p newest-cni-852445 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-852445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-443884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ start   │ -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	│ image   │ newest-cni-852445 image list --format=json                                                                                                                                                                                                           │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │ 10 Dec 25 23:06 UTC │
	│ pause   │ -p newest-cni-852445 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-852445            │ jenkins │ v1.37.0 │ 10 Dec 25 23:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
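	The Audit table above is the per-profile command history that minikube records and reproduces in its log bundle. As a minimal sketch only (not something this run verifies): the same history can be pulled outside of CI with the commands below; the audit.json path and the jq field names are assumptions based on the MINIKUBE_HOME used in this job.
		# Print the full log bundle, which includes the Audit and Last Start sections shown here.
		out/minikube-linux-amd64 logs
		# Assumed location/format of the raw audit entries for this MINIKUBE_HOME; adjust if it differs.
		jq -r '.data | "\(.command)\t\(.profile)\t\(.startTime)"' \
		  /home/jenkins/minikube-integration/22061-5100/.minikube/logs/audit.json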
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:06:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:06:33.998397  300940 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:06:33.998736  300940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:33.998745  300940 out.go:374] Setting ErrFile to fd 2...
	I1210 23:06:33.998751  300940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:06:33.999066  300940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:06:33.999688  300940 out.go:368] Setting JSON to false
	I1210 23:06:34.001279  300940 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2936,"bootTime":1765405058,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:06:34.001394  300940 start.go:143] virtualization: kvm guest
	I1210 23:06:34.006832  300940 out.go:179] * [default-k8s-diff-port-443884] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:06:34.008506  300940 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:06:34.008731  300940 notify.go:221] Checking for updates...
	I1210 23:06:34.011505  300940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:06:34.012883  300940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:34.014143  300940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:06:34.015147  300940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:06:34.016780  300940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:06:34.019612  300940 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:34.020332  300940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:06:34.051068  300940 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:06:34.051183  300940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:34.128889  300940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-10 23:06:34.116849861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:34.129028  300940 docker.go:319] overlay module found
	I1210 23:06:34.132198  300940 out.go:179] * Using the docker driver based on existing profile
	I1210 23:06:34.133592  300940 start.go:309] selected driver: docker
	I1210 23:06:34.133609  300940 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-4438
84 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpir
ation:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:34.133757  300940 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:06:34.134502  300940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:06:34.231966  300940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-10 23:06:34.216116371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:06:34.232308  300940 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:34.232344  300940 cni.go:84] Creating CNI manager for ""
	I1210 23:06:34.232422  300940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:34.232481  300940 start.go:353] cluster config:
	{Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:34.235348  300940 out.go:179] * Starting "default-k8s-diff-port-443884" primary control-plane node in "default-k8s-diff-port-443884" cluster
	I1210 23:06:34.236627  300940 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:06:34.237856  300940 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:06:34.238883  300940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:34.238933  300940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:06:34.238933  300940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:06:34.238946  300940 cache.go:65] Caching tarball of preloaded images
	I1210 23:06:34.239069  300940 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:06:34.239095  300940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:06:34.239238  300940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:06:34.264707  300940 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:06:34.264734  300940 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:06:34.264756  300940 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:06:34.264794  300940 start.go:360] acquireMachinesLock for default-k8s-diff-port-443884: {Name:mk4710330ecf7371e663f4e39eab0b9ebe0090d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:06:34.264878  300940 start.go:364] duration metric: took 46.267µs to acquireMachinesLock for "default-k8s-diff-port-443884"
	I1210 23:06:34.264904  300940 start.go:96] Skipping create...Using existing machine configuration
	I1210 23:06:34.264914  300940 fix.go:54] fixHost starting: 
	I1210 23:06:34.265201  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:34.289087  300940 fix.go:112] recreateIfNeeded on default-k8s-diff-port-443884: state=Stopped err=<nil>
	W1210 23:06:34.289137  300940 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 23:06:33.423510  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:33.922992  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:34.423193  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:34.923161  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:35.423484  291593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:06:35.504421  291593 kubeadm.go:1114] duration metric: took 5.164791479s to wait for elevateKubeSystemPrivileges
	I1210 23:06:35.504460  291593 kubeadm.go:403] duration metric: took 15.586958934s to StartCluster
	I1210 23:06:35.504480  291593 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:35.504552  291593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:35.506076  291593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:35.506578  291593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:06:35.506587  291593 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:35.506686  291593 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:35.506784  291593 addons.go:70] Setting storage-provisioner=true in profile "auto-177285"
	I1210 23:06:35.506810  291593 addons.go:239] Setting addon storage-provisioner=true in "auto-177285"
	I1210 23:06:35.506816  291593 addons.go:70] Setting default-storageclass=true in profile "auto-177285"
	I1210 23:06:35.506846  291593 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-177285"
	I1210 23:06:35.506854  291593 host.go:66] Checking if "auto-177285" exists ...
	I1210 23:06:35.506786  291593 config.go:182] Loaded profile config "auto-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:35.507283  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:35.507417  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:35.508993  291593 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:35.510323  291593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:35.530601  291593 addons.go:239] Setting addon default-storageclass=true in "auto-177285"
	I1210 23:06:35.530637  291593 host.go:66] Checking if "auto-177285" exists ...
	I1210 23:06:35.530985  291593 cli_runner.go:164] Run: docker container inspect auto-177285 --format={{.State.Status}}
	I1210 23:06:35.534850  291593 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:33.481611  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 23:06:33.481636  296906 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 23:06:33.481707  296906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:06:33.513785  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:06:33.513877  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:06:33.513955  296906 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:33.513973  296906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:33.514036  296906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:06:33.553849  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:06:33.648218  296906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:33.661420  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 23:06:33.661442  296906 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 23:06:33.663696  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:33.666687  296906 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:06:33.676898  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:33.681220  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 23:06:33.681245  296906 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 23:06:33.703497  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 23:06:33.703671  296906 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 23:06:33.732748  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 23:06:33.732777  296906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 23:06:33.753437  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 23:06:33.753459  296906 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 23:06:33.771137  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 23:06:33.771160  296906 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 23:06:33.789740  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 23:06:33.789763  296906 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 23:06:33.807416  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 23:06:33.807441  296906 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 23:06:33.825503  296906 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:33.825530  296906 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 23:06:33.845553  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:34.976090  296906 node_ready.go:49] node "embed-certs-468067" is "Ready"
	I1210 23:06:34.976125  296906 node_ready.go:38] duration metric: took 1.309405149s for node "embed-certs-468067" to be "Ready" ...
	I1210 23:06:34.976141  296906 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:34.976199  296906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:35.581404  296906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917669318s)
	I1210 23:06:35.581496  296906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.735898772s)
	I1210 23:06:35.581771  296906 api_server.go:72] duration metric: took 2.137049672s to wait for apiserver process to appear ...
	I1210 23:06:35.581786  296906 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:35.581808  296906 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:06:35.582339  296906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.903659584s)
	I1210 23:06:35.584247  296906 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-468067 addons enable metrics-server
	
	I1210 23:06:35.588061  296906 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:35.588085  296906 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
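	The 500 responses above are the apiserver reporting that two post-start hooks (rbac bootstrap roles and the default priority classes) have not finished yet; minikube keeps polling /healthz until it returns 200. As a minimal sketch, the same endpoint can be checked by hand, assuming kubectl is pointed at this cluster's kubeconfig; the curl variant uses the apiserver address from the log, skips TLS verification, and may be rejected if anonymous access to /healthz is disabled on the cluster.
		# Verbose healthz, equivalent to the check list printed above.
		kubectl get --raw='/healthz?verbose'
		# Direct probe of the address from the log (quick check only; -k skips TLS verification).
		curl -k 'https://192.168.103.2:8443/healthz?verbose'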
	I1210 23:06:35.601132  296906 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 23:06:35.602513  296906 addons.go:530] duration metric: took 2.157749868s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 23:06:35.537350  291593 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:35.537373  291593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:35.537454  291593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-177285
	I1210 23:06:35.558820  291593 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:35.558852  291593 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:35.558914  291593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-177285
	I1210 23:06:35.570170  291593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/auto-177285/id_rsa Username:docker}
	I1210 23:06:35.585379  291593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/auto-177285/id_rsa Username:docker}
	I1210 23:06:35.617614  291593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:06:35.656684  291593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:35.693853  291593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:35.711281  291593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:35.835627  291593 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1210 23:06:35.837105  291593 node_ready.go:35] waiting up to 15m0s for node "auto-177285" to be "Ready" ...
	I1210 23:06:36.131614  291593 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
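	Earlier in this log the test starts a wait of up to 15m for node "auto-177285" to reach Ready (and further below the node is still reported as not Ready and retried). A minimal sketch of the equivalent manual check, assuming the profile name is also the kubeconfig context, as minikube sets up by default:
		# Inspect current node status, then block until Ready, mirroring the 15m wait in the log.
		kubectl --context auto-177285 get nodes
		kubectl --context auto-177285 wait --for=condition=Ready node/auto-177285 --timeout=15m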
	I1210 23:06:32.775527  299857 out.go:252] * Restarting existing docker container for "newest-cni-852445" ...
	I1210 23:06:32.775599  299857 cli_runner.go:164] Run: docker start newest-cni-852445
	I1210 23:06:33.068082  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:33.093693  299857 kic.go:430] container "newest-cni-852445" state is running.
	I1210 23:06:33.094145  299857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:33.119435  299857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/config.json ...
	I1210 23:06:33.119708  299857 machine.go:94] provisionDockerMachine start ...
	I1210 23:06:33.119765  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:33.149402  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:33.149823  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:33.149861  299857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:06:33.150957  299857 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45672->127.0.0.1:33104: read: connection reset by peer
	I1210 23:06:36.297894  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852445
	
	I1210 23:06:36.297919  299857 ubuntu.go:182] provisioning hostname "newest-cni-852445"
	I1210 23:06:36.297971  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.316747  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:36.316975  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:36.316989  299857 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852445 && echo "newest-cni-852445" | sudo tee /etc/hostname
	I1210 23:06:36.468525  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852445
	
	I1210 23:06:36.468611  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.489200  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:36.489471  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:36.489507  299857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852445/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:06:36.629240  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:06:36.629269  299857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:06:36.629291  299857 ubuntu.go:190] setting up certificates
	I1210 23:06:36.629317  299857 provision.go:84] configureAuth start
	I1210 23:06:36.629376  299857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:36.652138  299857 provision.go:143] copyHostCerts
	I1210 23:06:36.652215  299857 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:06:36.652229  299857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:06:36.652318  299857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:06:36.652462  299857 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:06:36.652478  299857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:06:36.652522  299857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:06:36.652622  299857 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:06:36.652635  299857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:06:36.652704  299857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:06:36.652790  299857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852445 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852445]
	I1210 23:06:36.895604  299857 provision.go:177] copyRemoteCerts
	I1210 23:06:36.895667  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:06:36.895709  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.914011  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.011012  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:06:37.029088  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 23:06:37.046901  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:06:37.064657  299857 provision.go:87] duration metric: took 435.311642ms to configureAuth
	I1210 23:06:37.064687  299857 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:06:37.064898  299857 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:37.065009  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.085208  299857 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:37.085461  299857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 23:06:37.085486  299857 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:06:37.398288  299857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
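	The SSH command above writes /etc/sysconfig/crio.minikube inside the node and restarts CRI-O. As a minimal sketch of verifying that from the host, assuming the docker driver keeps the node as a container named newest-cni-852445 (as the `docker start newest-cni-852445` call earlier in this log indicates):
		# Confirm the option landed and that CRI-O came back up after the restart.
		docker exec newest-cni-852445 cat /etc/sysconfig/crio.minikube
		docker exec newest-cni-852445 systemctl is-active crio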
	I1210 23:06:37.398316  299857 machine.go:97] duration metric: took 4.278597254s to provisionDockerMachine
	I1210 23:06:37.398341  299857 start.go:293] postStartSetup for "newest-cni-852445" (driver="docker")
	I1210 23:06:37.398358  299857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:06:37.398438  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:06:37.398494  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.416961  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.512920  299857 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:06:37.516847  299857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:06:37.516875  299857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:06:37.516889  299857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:06:37.516951  299857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:06:37.517048  299857 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:06:37.517191  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:06:37.525142  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:37.543049  299857 start.go:296] duration metric: took 144.690522ms for postStartSetup
	I1210 23:06:37.543139  299857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:06:37.543189  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:36.132978  291593 addons.go:530] duration metric: took 626.290446ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:06:36.339624  291593 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-177285" context rescaled to 1 replicas
	W1210 23:06:37.840242  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	I1210 23:06:37.562251  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.655787  299857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:06:37.660178  299857 fix.go:56] duration metric: took 4.907194227s for fixHost
	I1210 23:06:37.660199  299857 start.go:83] releasing machines lock for "newest-cni-852445", held for 4.90723805s
	I1210 23:06:37.660250  299857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852445
	I1210 23:06:37.677781  299857 ssh_runner.go:195] Run: cat /version.json
	I1210 23:06:37.677837  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.677877  299857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:06:37.677948  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:37.696317  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.697840  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:37.848157  299857 ssh_runner.go:195] Run: systemctl --version
	I1210 23:06:37.854561  299857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:06:37.891859  299857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:06:37.897193  299857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:06:37.897267  299857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:06:37.905531  299857 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 23:06:37.905561  299857 start.go:496] detecting cgroup driver to use...
	I1210 23:06:37.905593  299857 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:06:37.905640  299857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:06:37.920531  299857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:06:37.932874  299857 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:06:37.932931  299857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:06:37.950688  299857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:06:37.963712  299857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:06:38.047401  299857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:06:38.137777  299857 docker.go:234] disabling docker service ...
	I1210 23:06:38.137848  299857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:06:38.153421  299857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:06:38.166438  299857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:06:38.270774  299857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:06:38.362303  299857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:06:38.376136  299857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:06:38.392159  299857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:06:38.392215  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.402813  299857 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:06:38.402883  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.419583  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.430604  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.439927  299857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:06:38.451796  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.461382  299857 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.471234  299857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:38.480950  299857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:06:38.489106  299857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:06:38.497968  299857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:38.589223  299857 ssh_runner.go:195] Run: sudo systemctl restart crio
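The sed invocations above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of those two line substitutions, operating on the file contents as a string (the sample input is made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf forces the pause_image and cgroup_manager lines to the
// values the log above configures, matching whole lines like the sed edits do.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n" // made-up sample
	fmt.Print(rewriteCrioConf(in))
}
```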
	I1210 23:06:38.724489  299857 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:06:38.724549  299857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:06:38.728591  299857 start.go:564] Will wait 60s for crictl version
	I1210 23:06:38.728677  299857 ssh_runner.go:195] Run: which crictl
	I1210 23:06:38.732583  299857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:06:38.759011  299857 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:06:38.759092  299857 ssh_runner.go:195] Run: crio --version
	I1210 23:06:38.791727  299857 ssh_runner.go:195] Run: crio --version
	I1210 23:06:38.822752  299857 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 23:06:38.824759  299857 cli_runner.go:164] Run: docker network inspect newest-cni-852445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:38.845157  299857 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 23:06:38.850199  299857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
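The /etc/hosts rewrite above is a drop-then-append pattern: any existing host.minikube.internal line is filtered out and the current mapping is appended, so repeated restarts do not accumulate duplicate entries. A small Go sketch of the same idea (file I/O and sudo handling omitted; entries use a tab separator, matching the grep pattern in the command):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<name>" and appends the
// current "<ip>\t<name>" mapping, keeping the edit idempotent.
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry; replaced below
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.85.1", "host.minikube.internal"))
}
```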
	I1210 23:06:38.863435  299857 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 23:06:34.294290  300940 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-443884" ...
	I1210 23:06:34.294396  300940 cli_runner.go:164] Run: docker start default-k8s-diff-port-443884
	I1210 23:06:34.649842  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:34.672994  300940 kic.go:430] container "default-k8s-diff-port-443884" state is running.
	I1210 23:06:34.673891  300940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:06:34.699059  300940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/config.json ...
	I1210 23:06:34.699337  300940 machine.go:94] provisionDockerMachine start ...
	I1210 23:06:34.699413  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:34.734965  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:34.735279  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:34.735295  300940 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:06:34.735888  300940 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57772->127.0.0.1:33109: read: connection reset by peer
	I1210 23:06:37.871279  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:06:37.871301  300940 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-443884"
	I1210 23:06:37.871359  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:37.891431  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:37.891751  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:37.891777  300940 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-443884 && echo "default-k8s-diff-port-443884" | sudo tee /etc/hostname
	I1210 23:06:38.040404  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-443884
	
	I1210 23:06:38.040539  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.060025  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:38.060271  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:38.060297  300940 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-443884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-443884/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-443884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:06:38.207950  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:06:38.207983  300940 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:06:38.208029  300940 ubuntu.go:190] setting up certificates
	I1210 23:06:38.208053  300940 provision.go:84] configureAuth start
	I1210 23:06:38.208185  300940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:06:38.231137  300940 provision.go:143] copyHostCerts
	I1210 23:06:38.231222  300940 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:06:38.231245  300940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:06:38.231315  300940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:06:38.231434  300940 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:06:38.231446  300940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:06:38.231477  300940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:06:38.231547  300940 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:06:38.231558  300940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:06:38.231583  300940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:06:38.231659  300940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-443884 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-443884 localhost minikube]
	I1210 23:06:38.317400  300940 provision.go:177] copyRemoteCerts
	I1210 23:06:38.317453  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:06:38.317485  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.335820  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:38.434893  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:06:38.456809  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 23:06:38.476483  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 23:06:38.495869  300940 provision.go:87] duration metric: took 287.784765ms to configureAuth
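configureAuth above regenerates the machine's server certificate with the SANs listed at provision.go:117 and copies it to /etc/docker on the node. The sketch below shows the general shape of issuing such a cert with crypto/x509; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem, and the SAN list is copied from the log line above.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-443884"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list in the log above.
		DNSNames:    []string{"default-k8s-diff-port-443884", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed here for brevity; minikube signs server.pem with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```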
	I1210 23:06:38.495899  300940 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:06:38.496123  300940 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:38.496253  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.515948  300940 main.go:143] libmachine: Using SSH client type: native
	I1210 23:06:38.516170  300940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 23:06:38.516183  300940 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:06:38.845931  300940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:06:38.845959  300940 machine.go:97] duration metric: took 4.146605033s to provisionDockerMachine
	I1210 23:06:38.845974  300940 start.go:293] postStartSetup for "default-k8s-diff-port-443884" (driver="docker")
	I1210 23:06:38.845987  300940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:06:38.846060  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:06:38.846115  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:38.866867  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:38.970298  300940 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:06:38.973902  300940 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:06:38.973932  300940 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:06:38.973946  300940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:06:38.973994  300940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:06:38.974092  300940 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:06:38.974213  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:06:38.982188  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:38.864740  299857 kubeadm.go:884] updating cluster {Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNode
Requested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:06:38.864907  299857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:06:38.864962  299857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:38.900681  299857 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:38.900707  299857 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:06:38.900763  299857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:38.938500  299857 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:38.938525  299857 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:06:38.938534  299857 kubeadm.go:935] updating node { 192.168.85.2  8443 v1.35.0-beta.0 crio true true} ...
	I1210 23:06:38.938668  299857 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:06:38.938752  299857 ssh_runner.go:195] Run: crio config
	I1210 23:06:38.987572  299857 cni.go:84] Creating CNI manager for ""
	I1210 23:06:38.987604  299857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:38.987629  299857 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 23:06:38.987672  299857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852445 NodeName:newest-cni-852445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:06:38.987836  299857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852445"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:06:38.987933  299857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 23:06:38.997531  299857 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:06:38.997599  299857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:06:39.006142  299857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 23:06:39.021418  299857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 23:06:39.036980  299857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
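The generated config written above deliberately keeps the ranges disjoint: pods get 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) while services stay on the default 10.96.0.0/12. A quick standard-library check that two such prefixes cannot collide:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet / clusterCIDR from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from the config above
	// Overlaps reports whether any address falls in both prefixes.
	fmt.Println("pod/service CIDR overlap:", pod.Overlaps(svc)) // prints: false
}
```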
	I1210 23:06:39.051203  299857 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:06:39.055136  299857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:39.065232  299857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:39.157189  299857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:39.184579  299857 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445 for IP: 192.168.85.2
	I1210 23:06:39.184602  299857 certs.go:195] generating shared ca certs ...
	I1210 23:06:39.184621  299857 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:39.184814  299857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:06:39.184910  299857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:06:39.184928  299857 certs.go:257] generating profile certs ...
	I1210 23:06:39.185032  299857 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/client.key
	I1210 23:06:39.185095  299857 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/apiserver.key.948cca2b
	I1210 23:06:39.185149  299857 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/proxy-client.key
	I1210 23:06:39.185272  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:06:39.185302  299857 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:06:39.185311  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:06:39.185337  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:06:39.185361  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:06:39.185393  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:06:39.185443  299857 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:39.186533  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:06:39.207718  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:06:39.230169  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:06:39.255946  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:06:39.285516  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 23:06:39.305423  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:06:39.323436  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:06:39.344198  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/newest-cni-852445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:06:39.363470  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:06:39.384229  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:06:39.407810  299857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:06:39.425886  299857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:06:39.440443  299857 ssh_runner.go:195] Run: openssl version
	I1210 23:06:39.447086  299857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.454596  299857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:06:39.463311  299857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.468939  299857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.468999  299857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:06:39.511266  299857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:06:39.520211  299857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.528197  299857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:06:39.536388  299857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.540991  299857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.541054  299857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:06:39.600263  299857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:06:39.610730  299857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.620280  299857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:06:39.631063  299857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.635896  299857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.635958  299857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:39.692353  299857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:06:39.703873  299857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:06:39.711510  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:06:39.773521  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:06:39.834749  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:06:39.912533  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:06:39.974089  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:06:40.033417  299857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
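Each "openssl x509 ... -checkend 86400" run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger cert regeneration on restart. An equivalent check in Go (the path below is one of the certs from the log; error handling kept minimal):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```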
	I1210 23:06:40.087764  299857 kubeadm.go:401] StartCluster: {Name:newest-cni-852445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-852445 Namespace:default APIServerHAVIP
: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:40.087880  299857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:06:40.087934  299857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:06:40.141361  299857 cri.go:89] found id: "03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c"
	I1210 23:06:40.141390  299857 cri.go:89] found id: "d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef"
	I1210 23:06:40.141396  299857 cri.go:89] found id: "e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea"
	I1210 23:06:40.141401  299857 cri.go:89] found id: "3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189"
	I1210 23:06:40.141405  299857 cri.go:89] found id: ""
	I1210 23:06:40.141460  299857 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 23:06:40.157816  299857 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:40Z" level=error msg="open /run/runc: no such file or directory"
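The unpause check above first lists kube-system containers with crictl (one container ID per line of --quiet output) and then tolerates the runc failure, which is expected when /run/runc does not exist on the node. A hedged sketch of collecting IDs the same way crictl is invoked in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers shells out the same way the log above does:
// "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
// prints one container ID per line (or nothing when no containers match).
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(ids, err)
}
```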
	I1210 23:06:40.157913  299857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:06:40.170134  299857 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:06:40.170159  299857 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:06:40.170206  299857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:06:40.181214  299857 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:06:40.182332  299857 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-852445" does not appear in /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:40.183223  299857 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-5100/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-852445" cluster setting kubeconfig missing "newest-cni-852445" context setting]
	I1210 23:06:40.184334  299857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.186527  299857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:06:40.197522  299857 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 23:06:40.197554  299857 kubeadm.go:602] duration metric: took 27.389565ms to restartPrimaryControlPlane
	I1210 23:06:40.197565  299857 kubeadm.go:403] duration metric: took 109.811149ms to StartCluster
	I1210 23:06:40.197582  299857 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.197640  299857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:40.200098  299857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.200360  299857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:40.200517  299857 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:40.200603  299857 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-852445"
	I1210 23:06:40.200619  299857 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-852445"
	W1210 23:06:40.200626  299857 addons.go:248] addon storage-provisioner should already be in state true
	I1210 23:06:40.200665  299857 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:40.200709  299857 config.go:182] Loaded profile config "newest-cni-852445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:06:40.200754  299857 addons.go:70] Setting dashboard=true in profile "newest-cni-852445"
	I1210 23:06:40.200764  299857 addons.go:239] Setting addon dashboard=true in "newest-cni-852445"
	W1210 23:06:40.200771  299857 addons.go:248] addon dashboard should already be in state true
	I1210 23:06:40.200788  299857 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:40.201151  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.201243  299857 addons.go:70] Setting default-storageclass=true in profile "newest-cni-852445"
	I1210 23:06:40.201278  299857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852445"
	I1210 23:06:40.201603  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.201772  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.207388  299857 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:40.210195  299857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:40.236267  299857 addons.go:239] Setting addon default-storageclass=true in "newest-cni-852445"
	W1210 23:06:40.236409  299857 addons.go:248] addon default-storageclass should already be in state true
	I1210 23:06:40.236464  299857 host.go:66] Checking if "newest-cni-852445" exists ...
	I1210 23:06:40.237341  299857 cli_runner.go:164] Run: docker container inspect newest-cni-852445 --format={{.State.Status}}
	I1210 23:06:40.243205  299857 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 23:06:40.243208  299857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:40.244759  299857 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 23:06:39.001837  300940 start.go:296] duration metric: took 155.849393ms for postStartSetup
	I1210 23:06:39.001913  300940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:06:39.001967  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:39.021575  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:39.118660  300940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:06:39.123812  300940 fix.go:56] duration metric: took 4.858891888s for fixHost
	I1210 23:06:39.123840  300940 start.go:83] releasing machines lock for "default-k8s-diff-port-443884", held for 4.858948233s
	I1210 23:06:39.123908  300940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-443884
	I1210 23:06:39.143547  300940 ssh_runner.go:195] Run: cat /version.json
	I1210 23:06:39.143608  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:39.143616  300940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:06:39.143736  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:39.163406  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:39.164354  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:39.258513  300940 ssh_runner.go:195] Run: systemctl --version
	I1210 23:06:39.326555  300940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:06:39.366148  300940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:06:39.371076  300940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:06:39.371138  300940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:06:39.380621  300940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 23:06:39.380660  300940 start.go:496] detecting cgroup driver to use...
	I1210 23:06:39.380695  300940 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:06:39.380741  300940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:06:39.400236  300940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:06:39.413878  300940 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:06:39.413933  300940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:06:39.429873  300940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:06:39.444420  300940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:06:39.528197  300940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:06:39.644910  300940 docker.go:234] disabling docker service ...
	I1210 23:06:39.644973  300940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:06:39.665196  300940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:06:39.683257  300940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:06:39.805982  300940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:06:39.937108  300940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:06:39.956999  300940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:06:39.975619  300940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:06:39.975707  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:39.989238  300940 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:06:39.989306  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.001456  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.014158  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.026751  300940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:06:40.038413  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.050725  300940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.062566  300940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:06:40.075811  300940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:06:40.086724  300940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:06:40.097719  300940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:40.227325  300940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:06:40.452547  300940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:06:40.452612  300940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:06:40.457899  300940 start.go:564] Will wait 60s for crictl version
	I1210 23:06:40.457961  300940 ssh_runner.go:195] Run: which crictl
	I1210 23:06:40.462915  300940 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:06:40.498177  300940 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:06:40.498313  300940 ssh_runner.go:195] Run: crio --version
	I1210 23:06:40.535783  300940 ssh_runner.go:195] Run: crio --version
	I1210 23:06:40.575936  300940 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:06:36.082509  296906 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:06:36.087836  296906 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:36.087867  296906 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:36.582371  296906 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 23:06:36.586521  296906 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 23:06:36.587479  296906 api_server.go:141] control plane version: v1.34.2
	I1210 23:06:36.587504  296906 api_server.go:131] duration metric: took 1.005711086s to wait for apiserver health ...
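The healthz probes above return 500 while rbac/bootstrap-roles is still pending and flip to 200 once bootstrap finishes, at which point the wait ends after roughly a second. A minimal polling loop in that spirit (the endpoint is taken from the log; skipping TLS verification is an assumption made for the sketch, not how minikube authenticates to the apiserver):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz keeps probing the healthz URL until it returns 200 or the
// deadline passes, printing non-200 bodies the way the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch-only shortcut
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```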
	I1210 23:06:36.587512  296906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:36.591359  296906 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:36.591389  296906 system_pods.go:61] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:36.591396  296906 system_pods.go:61] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:36.591403  296906 system_pods.go:61] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:36.591409  296906 system_pods.go:61] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:36.591416  296906 system_pods.go:61] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:36.591422  296906 system_pods.go:61] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:36.591430  296906 system_pods.go:61] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:36.591435  296906 system_pods.go:61] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:36.591459  296906 system_pods.go:74] duration metric: took 3.941041ms to wait for pod list to return data ...
	I1210 23:06:36.591467  296906 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:36.594028  296906 default_sa.go:45] found service account: "default"
	I1210 23:06:36.594045  296906 default_sa.go:55] duration metric: took 2.5739ms for default service account to be created ...
	I1210 23:06:36.594053  296906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:06:36.596909  296906 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:36.596934  296906 system_pods.go:89] "coredns-66bc5c9577-qw48c" [9d3a4070-1f4d-4958-8748-0d5c00f296ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:36.596941  296906 system_pods.go:89] "etcd-embed-certs-468067" [3c656ac4-5d01-48fc-9019-2c903c52892f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:36.596953  296906 system_pods.go:89] "kindnet-dkdlj" [0837f94b-4c23-4d59-9718-dcf9b2f5a276] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:36.596961  296906 system_pods.go:89] "kube-apiserver-embed-certs-468067" [7cfa0477-91bc-4165-a92c-7492c5c632fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:36.596970  296906 system_pods.go:89] "kube-controller-manager-embed-certs-468067" [6fa93dee-d988-49a8-ac7c-45b8e5dc52ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:36.596977  296906 system_pods.go:89] "kube-proxy-27pft" [a31d4ae8-642f-4d74-9bf7-726ec7a2dacb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:36.596982  296906 system_pods.go:89] "kube-scheduler-embed-certs-468067" [9039a720-77c3-49fa-9edd-f3c6d7e98fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:36.596988  296906 system_pods.go:89] "storage-provisioner" [cba94e39-8a92-4cf5-a616-80857c063c22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:36.596994  296906 system_pods.go:126] duration metric: took 2.936216ms to wait for k8s-apps to be running ...
	I1210 23:06:36.597004  296906 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:06:36.597041  296906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:36.610319  296906 system_svc.go:56] duration metric: took 13.303884ms WaitForService to wait for kubelet
	I1210 23:06:36.610352  296906 kubeadm.go:587] duration metric: took 3.165630309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:36.610376  296906 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:36.613420  296906 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:36.613446  296906 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:36.613462  296906 node_conditions.go:105] duration metric: took 3.081579ms to run NodePressure ...
	I1210 23:06:36.613472  296906 start.go:242] waiting for startup goroutines ...
	I1210 23:06:36.613479  296906 start.go:247] waiting for cluster config update ...
	I1210 23:06:36.613491  296906 start.go:256] writing updated cluster config ...
	I1210 23:06:36.613775  296906 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:36.617511  296906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:06:36.620922  296906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qw48c" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 23:06:38.626894  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	W1210 23:06:40.628411  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	I1210 23:06:40.577923  300940 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-443884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:06:40.600795  300940 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 23:06:40.605409  300940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:40.617631  300940 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:06:40.617820  300940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:06:40.617894  300940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:40.659901  300940 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:40.659927  300940 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:06:40.659982  300940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:06:40.691790  300940 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:06:40.691815  300940 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:06:40.691825  300940 kubeadm.go:935] updating node { 192.168.76.2  8444 v1.34.2 crio true true} ...
	I1210 23:06:40.691997  300940 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-443884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:06:40.692098  300940 ssh_runner.go:195] Run: crio config
	I1210 23:06:40.752609  300940 cni.go:84] Creating CNI manager for ""
	I1210 23:06:40.752650  300940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:06:40.752670  300940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:06:40.752702  300940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-443884 NodeName:default-k8s-diff-port-443884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:06:40.752850  300940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-443884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
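	[Editor's note, not part of the log: the kubeadm config dumped above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the run writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal, illustrative sketch only -- not minikube source -- the following Go program splits such a file into its documents and prints each apiVersion/kind, assuming gopkg.in/yaml.v3 is available; the program name and usage are hypothetical.]

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Usage (hypothetical): splitkubeadm /var/tmp/minikube/kubeadm.yaml.new
		if len(os.Args) < 2 {
			log.Fatal("usage: splitkubeadm <kubeadm.yaml>")
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// yaml.Decoder iterates over the documents separated by "---".
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			// Each document declares its own apiVersion and kind,
			// matching the four sections shown in the log above.
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}

	[Run against the generated file, this would list the four kinds seen above.]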
	I1210 23:06:40.752925  300940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:06:40.763697  300940 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:06:40.763757  300940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:06:40.774128  300940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 23:06:40.791063  300940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:06:40.807363  300940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 23:06:40.823727  300940 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:06:40.828562  300940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:06:40.841823  300940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:40.954208  300940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:40.980761  300940 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884 for IP: 192.168.76.2
	I1210 23:06:40.980786  300940 certs.go:195] generating shared ca certs ...
	I1210 23:06:40.980830  300940 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:40.981045  300940 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:06:40.981136  300940 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:06:40.981152  300940 certs.go:257] generating profile certs ...
	I1210 23:06:40.981255  300940 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/client.key
	I1210 23:06:40.981338  300940 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key.03b95e94
	I1210 23:06:40.981388  300940 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key
	I1210 23:06:40.981522  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:06:40.981557  300940 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:06:40.981566  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:06:40.981598  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:06:40.981627  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:06:40.981688  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:06:40.981745  300940 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:06:40.982579  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:06:41.006398  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:06:41.029138  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:06:41.055979  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:06:41.090931  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 23:06:41.115519  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:06:41.143539  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:06:41.167967  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/default-k8s-diff-port-443884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:06:41.191741  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:06:41.216094  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:06:41.238887  300940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:06:41.262280  300940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:06:41.279468  300940 ssh_runner.go:195] Run: openssl version
	I1210 23:06:41.288146  300940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.298581  300940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:06:41.309025  300940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.314246  300940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.314310  300940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:06:41.372078  300940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:06:41.383531  300940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.394581  300940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:06:41.404966  300940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.410479  300940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.410543  300940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:06:41.469741  300940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:06:41.480412  300940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.491482  300940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:06:41.502157  300940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.507486  300940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.507545  300940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:06:41.566047  300940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:06:41.576696  300940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:06:41.582841  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:06:41.638345  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:06:41.706395  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:06:41.769498  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:06:41.833360  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:06:41.892236  300940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 23:06:41.954681  300940 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-443884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-443884 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:06:41.954804  300940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:06:41.954887  300940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:06:42.001767  300940 cri.go:89] found id: "26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1"
	I1210 23:06:42.001791  300940 cri.go:89] found id: "42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865"
	I1210 23:06:42.001797  300940 cri.go:89] found id: "ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2"
	I1210 23:06:42.001801  300940 cri.go:89] found id: "2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f"
	I1210 23:06:42.001806  300940 cri.go:89] found id: ""
	I1210 23:06:42.001849  300940 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 23:06:42.030546  300940 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:06:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:06:42.030716  300940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:06:42.044302  300940 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:06:42.044327  300940 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:06:42.044379  300940 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:06:42.055126  300940 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:06:42.056570  300940 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-443884" does not appear in /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:42.057658  300940 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-5100/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-443884" cluster setting kubeconfig missing "default-k8s-diff-port-443884" context setting]
	I1210 23:06:42.059194  300940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:42.061908  300940 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:06:42.079840  300940 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 23:06:42.079951  300940 kubeadm.go:602] duration metric: took 35.615134ms to restartPrimaryControlPlane
	I1210 23:06:42.079995  300940 kubeadm.go:403] duration metric: took 125.323825ms to StartCluster
	I1210 23:06:42.080047  300940 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:42.080161  300940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:06:42.084071  300940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:06:42.084536  300940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 IPv6: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:06:42.084601  300940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:06:42.085200  300940 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-443884"
	I1210 23:06:42.085295  300940 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-443884"
	W1210 23:06:42.085329  300940 addons.go:248] addon storage-provisioner should already be in state true
	I1210 23:06:42.085248  300940 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-443884"
	I1210 23:06:42.085473  300940 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-443884"
	W1210 23:06:42.085481  300940 addons.go:248] addon dashboard should already be in state true
	I1210 23:06:42.085508  300940 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:06:42.084796  300940 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:06:42.085258  300940 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-443884"
	I1210 23:06:42.085942  300940 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-443884"
	I1210 23:06:42.086277  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.086518  300940 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:06:42.087242  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.086831  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.089972  300940 out.go:179] * Verifying Kubernetes components...
	I1210 23:06:42.092776  300940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:06:42.123466  300940 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-443884"
	W1210 23:06:42.123493  300940 addons.go:248] addon default-storageclass should already be in state true
	I1210 23:06:42.123523  300940 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:06:42.123993  300940 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:06:42.129156  300940 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:06:42.130633  300940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:42.130666  300940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:42.130734  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:42.136811  300940 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 23:06:42.138347  300940 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 23:06:40.245565  299857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:40.245632  299857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:06:40.245725  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:40.248387  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 23:06:40.248407  299857 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 23:06:40.248696  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:40.277555  299857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:40.277578  299857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:40.277635  299857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852445
	I1210 23:06:40.287112  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:40.288514  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:40.308721  299857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/newest-cni-852445/id_rsa Username:docker}
	I1210 23:06:40.378056  299857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:40.396028  299857 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:40.396108  299857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:40.414620  299857 api_server.go:72] duration metric: took 214.224289ms to wait for apiserver process to appear ...
	I1210 23:06:40.414665  299857 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:40.414688  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:40.419399  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 23:06:40.419428  299857 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 23:06:40.419623  299857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:40.432696  299857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:40.440049  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 23:06:40.440072  299857 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 23:06:40.458369  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 23:06:40.458393  299857 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 23:06:40.478235  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 23:06:40.478262  299857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 23:06:40.498123  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 23:06:40.498157  299857 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 23:06:40.515237  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 23:06:40.515265  299857 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 23:06:40.531579  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 23:06:40.531607  299857 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 23:06:40.548879  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 23:06:40.548906  299857 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 23:06:40.566434  299857 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:40.566460  299857 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 23:06:40.584021  299857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:42.336809  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:06:42.336847  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:06:42.336865  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:42.388168  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:06:42.388276  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:06:42.415493  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:42.433716  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:06:42.433746  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:06:42.915189  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:42.928263  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:42.928320  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
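	[Editor's note, not part of the log: the 403 and 500 responses above come from repeatedly probing the apiserver's /healthz endpoint until it returns 200 ("ok", which happens at 23:06:43 below). For illustration only -- this is not minikube's actual implementation -- a minimal Go poller in the same spirit; the URL is taken from the log, while the timeout and retry interval are arbitrary assumptions.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's certificate is not trusted by this host and the
			// probe is anonymous (hence the 403s above), so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}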
	I1210 23:06:43.299625  299857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.879943798s)
	I1210 23:06:43.299636  299857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.866915243s)
	I1210 23:06:43.299819  299857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.715756541s)
	I1210 23:06:43.302089  299857 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-852445 addons enable metrics-server
	
	I1210 23:06:43.317451  299857 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 23:06:39.841062  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	W1210 23:06:41.841704  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	I1210 23:06:42.139758  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 23:06:42.139817  300940 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 23:06:42.139973  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:42.162604  300940 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:42.162732  300940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:06:42.162828  300940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:06:42.165111  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:42.188810  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:42.194934  300940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:06:42.328003  300940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:06:42.349344  300940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:06:42.375531  300940 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:06:42.387691  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 23:06:42.387829  300940 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 23:06:42.428474  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 23:06:42.428506  300940 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 23:06:42.467773  300940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:06:42.484598  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 23:06:42.484680  300940 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 23:06:42.554689  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 23:06:42.554711  300940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 23:06:42.588602  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 23:06:42.588652  300940 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 23:06:42.619127  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 23:06:42.619149  300940 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 23:06:42.649837  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 23:06:42.649862  300940 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 23:06:42.674252  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 23:06:42.674281  300940 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 23:06:42.694637  300940 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:42.694689  300940 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 23:06:42.723045  300940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 23:06:43.319077  299857 addons.go:530] duration metric: took 3.118537165s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 23:06:43.415395  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:43.420773  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:43.420800  299857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:43.915094  299857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:06:43.924140  299857 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 23:06:43.925414  299857 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 23:06:43.925441  299857 api_server.go:131] duration metric: took 3.510768581s to wait for apiserver health ...
	I1210 23:06:43.925468  299857 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:43.932186  299857 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:43.932377  299857 system_pods.go:61] "coredns-7d764666f9-nlx4t" [2f260fe5-0362-419b-9fa7-b773b56a74f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 23:06:43.932426  299857 system_pods.go:61] "etcd-newest-cni-852445" [09281ba7-a26f-4bfc-b2ec-81fc85f323e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:43.932468  299857 system_pods.go:61] "kindnet-qnlhj" [6573bdb3-e42a-41f9-b284-370c54e28aec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:43.932505  299857 system_pods.go:61] "kube-apiserver-newest-cni-852445" [22610c50-364e-4ad1-b58d-a7a410acad6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:43.932537  299857 system_pods.go:61] "kube-controller-manager-newest-cni-852445" [1fea0a39-fcaa-43aa-9d98-c5c85bf53fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:43.932555  299857 system_pods.go:61] "kube-proxy-b8hgz" [28018116-263f-4460-bef3-54ee0930fde9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:43.932604  299857 system_pods.go:61] "kube-scheduler-newest-cni-852445" [a16c64c2-4c89-4989-9327-827fa77eff6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:43.932623  299857 system_pods.go:61] "storage-provisioner" [4a2e7f71-19fc-4f51-a7ae-a9a487663a80] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 23:06:43.932640  299857 system_pods.go:74] duration metric: took 7.164458ms to wait for pod list to return data ...
	I1210 23:06:43.932679  299857 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:43.939486  299857 default_sa.go:45] found service account: "default"
	I1210 23:06:43.939513  299857 default_sa.go:55] duration metric: took 6.82645ms for default service account to be created ...
	I1210 23:06:43.939529  299857 kubeadm.go:587] duration metric: took 3.739138349s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 23:06:43.939549  299857 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:43.943435  299857 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:43.943526  299857 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:43.943563  299857 node_conditions.go:105] duration metric: took 4.008818ms to run NodePressure ...
	I1210 23:06:43.943589  299857 start.go:242] waiting for startup goroutines ...
	I1210 23:06:43.943610  299857 start.go:247] waiting for cluster config update ...
	I1210 23:06:43.943634  299857 start.go:256] writing updated cluster config ...
	I1210 23:06:43.943956  299857 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:44.018002  299857 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 23:06:44.020960  299857 out.go:179] * Done! kubectl is now configured to use "newest-cni-852445" cluster and "default" namespace by default
	I1210 23:06:44.343013  300940 node_ready.go:49] node "default-k8s-diff-port-443884" is "Ready"
	I1210 23:06:44.343051  300940 node_ready.go:38] duration metric: took 1.967467282s for node "default-k8s-diff-port-443884" to be "Ready" ...
	I1210 23:06:44.343067  300940 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:44.343132  300940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:45.153751  300940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.685940335s)
	I1210 23:06:45.154107  300940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.431020687s)
	I1210 23:06:45.154401  300940 api_server.go:72] duration metric: took 3.069517223s to wait for apiserver process to appear ...
	I1210 23:06:45.154417  300940 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:45.154436  300940 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 23:06:45.154730  300940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.805349781s)
	I1210 23:06:45.156612  300940 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-443884 addons enable metrics-server
	
	I1210 23:06:45.161624  300940 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:06:45.161664  300940 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:06:45.166144  300940 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 23:06:42.636686  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	W1210 23:06:45.128721  296906 pod_ready.go:104] pod "coredns-66bc5c9577-qw48c" is not "Ready", error: <nil>
	W1210 23:06:43.841961  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	W1210 23:06:46.340736  291593 node_ready.go:57] node "auto-177285" has "Ready":"False" status (will retry)
	I1210 23:06:47.341235  291593 node_ready.go:49] node "auto-177285" is "Ready"
	I1210 23:06:47.341268  291593 node_ready.go:38] duration metric: took 11.504132205s for node "auto-177285" to be "Ready" ...
	I1210 23:06:47.341284  291593 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:06:47.341349  291593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:06:47.356786  291593 api_server.go:72] duration metric: took 11.850167974s to wait for apiserver process to appear ...
	I1210 23:06:47.356810  291593 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:06:47.356831  291593 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 23:06:47.361870  291593 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1210 23:06:47.362934  291593 api_server.go:141] control plane version: v1.34.2
	I1210 23:06:47.362965  291593 api_server.go:131] duration metric: took 6.146963ms to wait for apiserver health ...
	I1210 23:06:47.362975  291593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:47.367240  291593 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:47.367281  291593 system_pods.go:61] "coredns-66bc5c9577-lvm7h" [f8480134-35d3-461b-8f23-c9ab48464a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:47.367292  291593 system_pods.go:61] "etcd-auto-177285" [a3057a4a-30ed-42e6-81ce-494cc57dc55a] Running
	I1210 23:06:47.367301  291593 system_pods.go:61] "kindnet-58qnk" [51db2d7e-beb3-4cf9-822f-ad006905848c] Running
	I1210 23:06:47.367306  291593 system_pods.go:61] "kube-apiserver-auto-177285" [38af29a0-ca84-401d-a8ce-930471e28234] Running
	I1210 23:06:47.367327  291593 system_pods.go:61] "kube-controller-manager-auto-177285" [cfe2981e-2d97-4622-88bb-0ad2f8e491f6] Running
	I1210 23:06:47.367337  291593 system_pods.go:61] "kube-proxy-hr56m" [31079b1b-7a82-404c-bddd-7855ffcaf328] Running
	I1210 23:06:47.367354  291593 system_pods.go:61] "kube-scheduler-auto-177285" [a2557f3f-1039-4c9f-a602-1214bdd635fb] Running
	I1210 23:06:47.367367  291593 system_pods.go:61] "storage-provisioner" [c12b0400-f568-434b-9aea-fd0745a45f0c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:47.367375  291593 system_pods.go:74] duration metric: took 4.392685ms to wait for pod list to return data ...
	I1210 23:06:47.367384  291593 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:47.369911  291593 default_sa.go:45] found service account: "default"
	I1210 23:06:47.369933  291593 default_sa.go:55] duration metric: took 2.542832ms for default service account to be created ...
	I1210 23:06:47.369942  291593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:06:47.373190  291593 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:47.373219  291593 system_pods.go:89] "coredns-66bc5c9577-lvm7h" [f8480134-35d3-461b-8f23-c9ab48464a08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:47.373228  291593 system_pods.go:89] "etcd-auto-177285" [a3057a4a-30ed-42e6-81ce-494cc57dc55a] Running
	I1210 23:06:47.373236  291593 system_pods.go:89] "kindnet-58qnk" [51db2d7e-beb3-4cf9-822f-ad006905848c] Running
	I1210 23:06:47.373242  291593 system_pods.go:89] "kube-apiserver-auto-177285" [38af29a0-ca84-401d-a8ce-930471e28234] Running
	I1210 23:06:47.373248  291593 system_pods.go:89] "kube-controller-manager-auto-177285" [cfe2981e-2d97-4622-88bb-0ad2f8e491f6] Running
	I1210 23:06:47.373263  291593 system_pods.go:89] "kube-proxy-hr56m" [31079b1b-7a82-404c-bddd-7855ffcaf328] Running
	I1210 23:06:47.373268  291593 system_pods.go:89] "kube-scheduler-auto-177285" [a2557f3f-1039-4c9f-a602-1214bdd635fb] Running
	I1210 23:06:47.373275  291593 system_pods.go:89] "storage-provisioner" [c12b0400-f568-434b-9aea-fd0745a45f0c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:47.373299  291593 retry.go:31] will retry after 303.811995ms: missing components: kube-dns
	I1210 23:06:47.683697  291593 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:47.683733  291593 system_pods.go:89] "coredns-66bc5c9577-lvm7h" [f8480134-35d3-461b-8f23-c9ab48464a08] Running
	I1210 23:06:47.683748  291593 system_pods.go:89] "etcd-auto-177285" [a3057a4a-30ed-42e6-81ce-494cc57dc55a] Running
	I1210 23:06:47.683754  291593 system_pods.go:89] "kindnet-58qnk" [51db2d7e-beb3-4cf9-822f-ad006905848c] Running
	I1210 23:06:47.683765  291593 system_pods.go:89] "kube-apiserver-auto-177285" [38af29a0-ca84-401d-a8ce-930471e28234] Running
	I1210 23:06:47.683769  291593 system_pods.go:89] "kube-controller-manager-auto-177285" [cfe2981e-2d97-4622-88bb-0ad2f8e491f6] Running
	I1210 23:06:47.683774  291593 system_pods.go:89] "kube-proxy-hr56m" [31079b1b-7a82-404c-bddd-7855ffcaf328] Running
	I1210 23:06:47.683779  291593 system_pods.go:89] "kube-scheduler-auto-177285" [a2557f3f-1039-4c9f-a602-1214bdd635fb] Running
	I1210 23:06:47.683783  291593 system_pods.go:89] "storage-provisioner" [c12b0400-f568-434b-9aea-fd0745a45f0c] Running
	I1210 23:06:47.683792  291593 system_pods.go:126] duration metric: took 313.84309ms to wait for k8s-apps to be running ...
	I1210 23:06:47.683806  291593 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:06:47.683856  291593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:47.705272  291593 system_svc.go:56] duration metric: took 21.454263ms WaitForService to wait for kubelet
	I1210 23:06:47.705302  291593 kubeadm.go:587] duration metric: took 12.198687097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:47.705412  291593 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:47.708860  291593 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:47.708890  291593 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:47.708910  291593 node_conditions.go:105] duration metric: took 3.491631ms to run NodePressure ...
	I1210 23:06:47.708924  291593 start.go:242] waiting for startup goroutines ...
	I1210 23:06:47.708935  291593 start.go:247] waiting for cluster config update ...
	I1210 23:06:47.708953  291593 start.go:256] writing updated cluster config ...
	I1210 23:06:47.709514  291593 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:47.715192  291593 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:06:47.721113  291593 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lvm7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:47.725954  291593 pod_ready.go:94] pod "coredns-66bc5c9577-lvm7h" is "Ready"
	I1210 23:06:47.725980  291593 pod_ready.go:86] duration metric: took 4.842488ms for pod "coredns-66bc5c9577-lvm7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:47.728338  291593 pod_ready.go:83] waiting for pod "etcd-auto-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:47.732402  291593 pod_ready.go:94] pod "etcd-auto-177285" is "Ready"
	I1210 23:06:47.732419  291593 pod_ready.go:86] duration metric: took 4.064401ms for pod "etcd-auto-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:47.734343  291593 pod_ready.go:83] waiting for pod "kube-apiserver-auto-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:47.738238  291593 pod_ready.go:94] pod "kube-apiserver-auto-177285" is "Ready"
	I1210 23:06:47.738255  291593 pod_ready.go:86] duration metric: took 3.893467ms for pod "kube-apiserver-auto-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:47.740250  291593 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:48.120714  291593 pod_ready.go:94] pod "kube-controller-manager-auto-177285" is "Ready"
	I1210 23:06:48.120741  291593 pod_ready.go:86] duration metric: took 380.470337ms for pod "kube-controller-manager-auto-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:48.321263  291593 pod_ready.go:83] waiting for pod "kube-proxy-hr56m" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:06:45.167876  300940 addons.go:530] duration metric: took 3.083284447s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 23:06:45.655428  300940 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 23:06:45.660546  300940 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1210 23:06:45.661720  300940 api_server.go:141] control plane version: v1.34.2
	I1210 23:06:45.661743  300940 api_server.go:131] duration metric: took 507.319999ms to wait for apiserver health ...
	I1210 23:06:45.661752  300940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:06:45.665155  300940 system_pods.go:59] 8 kube-system pods found
	I1210 23:06:45.665190  300940 system_pods.go:61] "coredns-66bc5c9577-s8zsm" [24faae58-d6c6-42ad-93d3-3d160895982e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:45.665202  300940 system_pods.go:61] "etcd-default-k8s-diff-port-443884" [306255e6-2652-4217-ade8-a96f119869f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:45.665216  300940 system_pods.go:61] "kindnet-wtcv9" [d5d31b10-60af-4ff4-bb38-44edc65ef3d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:45.665226  300940 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-443884" [4fb15273-fe29-41cc-9e81-99448e6f455a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:45.665238  300940 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-443884" [df38c0f6-f94b-404f-b33c-c6c522b7a29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:45.665245  300940 system_pods.go:61] "kube-proxy-lwnhd" [fcf815a4-e235-459b-b10a-31761cb8ad21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:45.665253  300940 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-443884" [7bb103ce-e5ca-49af-948f-735d76edbdd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:45.665264  300940 system_pods.go:61] "storage-provisioner" [81e22dd7-170e-4dfb-abf8-96dde77438ac] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:45.665275  300940 system_pods.go:74] duration metric: took 3.516675ms to wait for pod list to return data ...
	I1210 23:06:45.665290  300940 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:06:45.667855  300940 default_sa.go:45] found service account: "default"
	I1210 23:06:45.667879  300940 default_sa.go:55] duration metric: took 2.579846ms for default service account to be created ...
	I1210 23:06:45.667889  300940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:06:45.670997  300940 system_pods.go:86] 8 kube-system pods found
	I1210 23:06:45.671029  300940 system_pods.go:89] "coredns-66bc5c9577-s8zsm" [24faae58-d6c6-42ad-93d3-3d160895982e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:06:45.671041  300940 system_pods.go:89] "etcd-default-k8s-diff-port-443884" [306255e6-2652-4217-ade8-a96f119869f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:06:45.671052  300940 system_pods.go:89] "kindnet-wtcv9" [d5d31b10-60af-4ff4-bb38-44edc65ef3d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 23:06:45.671061  300940 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-443884" [4fb15273-fe29-41cc-9e81-99448e6f455a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:06:45.671082  300940 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-443884" [df38c0f6-f94b-404f-b33c-c6c522b7a29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:06:45.671094  300940 system_pods.go:89] "kube-proxy-lwnhd" [fcf815a4-e235-459b-b10a-31761cb8ad21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:06:45.671102  300940 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-443884" [7bb103ce-e5ca-49af-948f-735d76edbdd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:06:45.671116  300940 system_pods.go:89] "storage-provisioner" [81e22dd7-170e-4dfb-abf8-96dde77438ac] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:06:45.671128  300940 system_pods.go:126] duration metric: took 3.231952ms to wait for k8s-apps to be running ...
	I1210 23:06:45.671140  300940 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:06:45.671194  300940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:06:45.689223  300940 system_svc.go:56] duration metric: took 18.072117ms WaitForService to wait for kubelet
	I1210 23:06:45.689257  300940 kubeadm.go:587] duration metric: took 3.604374238s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:06:45.689280  300940 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:06:45.693023  300940 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:06:45.693063  300940 node_conditions.go:123] node cpu capacity is 8
	I1210 23:06:45.693080  300940 node_conditions.go:105] duration metric: took 3.794537ms to run NodePressure ...
	I1210 23:06:45.693095  300940 start.go:242] waiting for startup goroutines ...
	I1210 23:06:45.693106  300940 start.go:247] waiting for cluster config update ...
	I1210 23:06:45.693121  300940 start.go:256] writing updated cluster config ...
	I1210 23:06:45.693473  300940 ssh_runner.go:195] Run: rm -f paused
	I1210 23:06:45.698727  300940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:06:45.704389  300940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s8zsm" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 23:06:47.716321  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.587309438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.590870202Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3f6be5a7-85cd-4be2-97c5-8fd061b3e005 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.591438841Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c768b551-0486-4544-93eb-4d16ba906717 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.593070337Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.593718797Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.594885105Z" level=info msg="Ran pod sandbox 09c842d6094c8797613b5462917d431a99888d3cfdcb034e744df9acaff64af4 with infra container: kube-system/kube-proxy-b8hgz/POD" id=3f6be5a7-85cd-4be2-97c5-8fd061b3e005 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.595686319Z" level=info msg="Ran pod sandbox b9e3efecea7d5a6dab765f79bf491c5b60d32268d98e6c70cfb18f3d94b60dd5 with infra container: kube-system/kindnet-qnlhj/POD" id=c768b551-0486-4544-93eb-4d16ba906717 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.598095298Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=570b04be-60c1-476a-90ae-ff570cc2a11b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.598112673Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=795b4cca-6e36-4f4e-a724-21008e5755b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.599800309Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=354c8f2d-bdcd-4d6b-a365-a5c81f087f7e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.600921553Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=33b8fddd-c9bd-4d86-bbb2-362754996a35 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.602741368Z" level=info msg="Creating container: kube-system/kube-proxy-b8hgz/kube-proxy" id=e5d5ecc2-e4d7-41d6-8741-33975fba48f2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.602877743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.603340105Z" level=info msg="Creating container: kube-system/kindnet-qnlhj/kindnet-cni" id=de144ed7-65ab-4568-a4d5-ace508c26edf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.603504908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.61144688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.612387316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.612599983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.613252055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.656443462Z" level=info msg="Created container 3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90: kube-system/kindnet-qnlhj/kindnet-cni" id=de144ed7-65ab-4568-a4d5-ace508c26edf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.657475548Z" level=info msg="Starting container: 3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90" id=a1dc7609-b589-4a50-af38-dbd2c0d8fd74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.659745242Z" level=info msg="Started container" PID=1053 containerID=3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90 description=kube-system/kindnet-qnlhj/kindnet-cni id=a1dc7609-b589-4a50-af38-dbd2c0d8fd74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9e3efecea7d5a6dab765f79bf491c5b60d32268d98e6c70cfb18f3d94b60dd5
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.66537584Z" level=info msg="Created container 818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c: kube-system/kube-proxy-b8hgz/kube-proxy" id=e5d5ecc2-e4d7-41d6-8741-33975fba48f2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.666427814Z" level=info msg="Starting container: 818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c" id=c81172ab-9655-4f58-b716-2435c8328a1f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:06:43 newest-cni-852445 crio[519]: time="2025-12-10T23:06:43.670140874Z" level=info msg="Started container" PID=1054 containerID=818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c description=kube-system/kube-proxy-b8hgz/kube-proxy id=c81172ab-9655-4f58-b716-2435c8328a1f name=/runtime.v1.RuntimeService/StartContainer sandboxID=09c842d6094c8797613b5462917d431a99888d3cfdcb034e744df9acaff64af4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3c995f571ddc8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   b9e3efecea7d5       kindnet-qnlhj                               kube-system
	818179dd96eb8       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   09c842d6094c8       kube-proxy-b8hgz                            kube-system
	03e6e0b39697a       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   10 seconds ago      Running             kube-apiserver            1                   2f596549ff3fe       kube-apiserver-newest-cni-852445            kube-system
	d827e3c942930       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   10 seconds ago      Running             kube-scheduler            1                   619f1521ac12c       kube-scheduler-newest-cni-852445            kube-system
	e9fc0c904d79f       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   10 seconds ago      Running             kube-controller-manager   1                   3e9e3f6edfd04       kube-controller-manager-newest-cni-852445   kube-system
	3927c2b5bd86d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   10 seconds ago      Running             etcd                      1                   2d47b6688fece       etcd-newest-cni-852445                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-852445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-852445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=newest-cni-852445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:06:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-852445
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:06:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 23:06:42 +0000   Wed, 10 Dec 2025 23:06:17 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-852445
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                0c48784b-8da6-4402-a03e-1f05808f1702
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-852445                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-qnlhj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-newest-cni-852445             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-852445    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-b8hgz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-newest-cni-852445             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  25s   node-controller  Node newest-cni-852445 event: Registered Node newest-cni-852445 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-852445 event: Registered Node newest-cni-852445 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [3927c2b5bd86d01f5a79b906bdf10b3f05d0a9e5d4b82176a34b00dc3749f189] <==
	{"level":"warn","ts":"2025-12-10T23:06:41.304854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.320605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.331283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.340884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.350219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.361263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.367255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.377328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.384694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.401180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.407093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.416029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.423900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.432196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.440240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.449551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.468225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.477555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.486418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.499328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.511991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.520860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.532362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.541941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:41.613894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33572","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:06:50 up 49 min,  0 user,  load average: 9.62, 4.27, 2.37
	Linux newest-cni-852445 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c995f571ddc84e24d2419d7acecdfb46d3f970a5475fb23c5103d44213bdb90] <==
	I1210 23:06:43.941078       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:43.941520       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 23:06:43.941716       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:43.941793       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:43.941822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:44.243732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:44.243867       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:44.243887       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:44.244067       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:06:44.637061       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:06:44.637103       1 metrics.go:72] Registering metrics
	I1210 23:06:44.637201       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [03e6e0b39697ad32b6c054454ba922a2a7d2d409e66d7fb65b0e7721cb77ee5c] <==
	I1210 23:06:42.498843       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 23:06:42.501706       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 23:06:42.501809       1 aggregator.go:187] initial CRD sync complete...
	I1210 23:06:42.501822       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:06:42.501830       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:06:42.501836       1 cache.go:39] Caches are synced for autoregister controller
	E1210 23:06:42.506845       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 23:06:42.525138       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:06:42.540723       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:42.540810       1 policy_source.go:248] refreshing policies
	I1210 23:06:42.589296       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:42.978321       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:06:43.029536       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:43.063816       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:43.075220       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:43.090012       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:43.164733       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.105.251"}
	I1210 23:06:43.186722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.53.19"}
	I1210 23:06:43.287926       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 23:06:46.011867       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:46.011946       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:46.058464       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:06:46.108995       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:46.108993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:46.260927       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [e9fc0c904d79f0d15189402866acbbebc372cb0b8dd8cc994ded2c94fbbc92ea] <==
	I1210 23:06:45.589227       1 range_allocator.go:177] "Sending events to api server"
	I1210 23:06:45.581419       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.589299       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-852445"
	I1210 23:06:45.589389       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 23:06:45.589397       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:45.589404       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581558       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.589472       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 23:06:45.582225       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582201       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581850       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581492       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.581918       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582036       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582089       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.584119       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:45.580886       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582898       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.583169       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582358       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.582519       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.678774       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:45.678800       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 23:06:45.678808       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 23:06:45.690232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [818179dd96eb81e646b0b2ec44b51361280c87f61991e284f06d8201d27a711c] <==
	I1210 23:06:43.722962       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:43.806606       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:43.907319       1 shared_informer.go:377] "Caches are synced"
	I1210 23:06:43.907355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 23:06:43.907463       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:43.958535       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:43.958725       1 server_linux.go:136] "Using iptables Proxier"
	I1210 23:06:43.966878       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:43.967951       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 23:06:43.967989       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:43.969353       1 config.go:200] "Starting service config controller"
	I1210 23:06:43.969424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:43.969498       1 config.go:309] "Starting node config controller"
	I1210 23:06:43.969504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:43.969511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:43.969865       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:43.969876       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:43.969893       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:43.969897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:44.070580       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:06:44.070951       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:44.070975       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d827e3c9429302cf91d9cbded5781623c9fbd60ad97a0dddec2398453e0b34ef] <==
	I1210 23:06:40.268778       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:06:42.362918       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:06:42.363036       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:06:42.363052       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:06:42.363062       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:06:42.479112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 23:06:42.479151       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:42.502846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:42.502883       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 23:06:42.506069       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:06:42.506165       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:06:42.603062       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.587858     669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: E1210 23:06:42.605028     669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-852445\" already exists" pod="kube-system/kube-apiserver-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.606500     669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: E1210 23:06:42.630581     669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-852445\" already exists" pod="kube-system/kube-controller-manager-newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.644087     669 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.644332     669 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-852445"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.644472     669 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 10 23:06:42 newest-cni-852445 kubelet[669]: I1210 23:06:42.646868     669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.267910     669 apiserver.go:52] "Watching apiserver"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.281294     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-852445" containerName="etcd"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.281765     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.282058     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-852445" containerName="kube-controller-manager"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.282334     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: E1210 23:06:43.366626     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-852445" containerName="kube-apiserver"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.369175     669 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.453884     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-xtables-lock\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.453953     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-lib-modules\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.453979     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28018116-263f-4460-bef3-54ee0930fde9-lib-modules\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.454066     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6573bdb3-e42a-41f9-b284-370c54e28aec-cni-cfg\") pod \"kindnet-qnlhj\" (UID: \"6573bdb3-e42a-41f9-b284-370c54e28aec\") " pod="kube-system/kindnet-qnlhj"
	Dec 10 23:06:43 newest-cni-852445 kubelet[669]: I1210 23:06:43.455209     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28018116-263f-4460-bef3-54ee0930fde9-xtables-lock\") pod \"kube-proxy-b8hgz\" (UID: \"28018116-263f-4460-bef3-54ee0930fde9\") " pod="kube-system/kube-proxy-b8hgz"
	Dec 10 23:06:44 newest-cni-852445 kubelet[669]: E1210 23:06:44.303559     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-852445" containerName="kube-scheduler"
	Dec 10 23:06:44 newest-cni-852445 kubelet[669]: E1210 23:06:44.495200     669 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-852445" containerName="kube-controller-manager"
	Dec 10 23:06:45 newest-cni-852445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:06:45 newest-cni-852445 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:06:45 newest-cni-852445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
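Note: the kubelet journal excerpt above comes from the minikube logs dump; the same lines can be re-collected directly from the node. A minimal sketch, reusing the ssh form this report uses elsewhere (profile name taken from this run):

	out/minikube-linux-amd64 ssh -p newest-cni-852445 sudo journalctl -xeu kubelet --all --full --no-pager
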
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-852445 -n newest-cni-852445
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-852445 -n newest-cni-852445: exit status 2 (482.670374ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-852445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp: exit status 1 (93.731888ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-nlx4t" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6svw4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-tcglp" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-852445 describe pod coredns-7d764666f9-nlx4t storage-provisioner dashboard-metrics-scraper-867fb5f87b-6svw4 kubernetes-dashboard-b84665fb8-tcglp: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.37s)
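Note: the post-mortem above first lists pods that are not in the Running phase via a field selector, then describes them. A minimal sketch of the same check run by hand (context name taken from this run; the pod list will differ per run, and the namespace/pod name below are placeholders):

	kubectl --context newest-cni-852445 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
	kubectl --context newest-cni-852445 -n kube-system describe pod <pod-name>

The NotFound errors in the describe step above are at least partly expected: kubectl describe pod without -n only searches the default namespace, while these pods live in kube-system and kubernetes-dashboard.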

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-468067 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-468067 --alsologtostderr -v=1: exit status 80 (2.437331525s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-468067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:07:28.311836  317529 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:07:28.312094  317529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:28.312102  317529 out.go:374] Setting ErrFile to fd 2...
	I1210 23:07:28.312107  317529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:28.312285  317529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:07:28.312540  317529 out.go:368] Setting JSON to false
	I1210 23:07:28.312567  317529 mustload.go:66] Loading cluster: embed-certs-468067
	I1210 23:07:28.312977  317529 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:28.313351  317529 cli_runner.go:164] Run: docker container inspect embed-certs-468067 --format={{.State.Status}}
	I1210 23:07:28.332542  317529 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:07:28.332891  317529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:28.401898  317529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-10 23:07:28.390769126 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:28.402775  317529 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-netw
ork:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:embed-certs-468067 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(bool=false) v
m-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 23:07:28.405099  317529 out.go:179] * Pausing node embed-certs-468067 ... 
	I1210 23:07:28.406312  317529 host.go:66] Checking if "embed-certs-468067" exists ...
	I1210 23:07:28.406551  317529 ssh_runner.go:195] Run: systemctl --version
	I1210 23:07:28.406597  317529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468067
	I1210 23:07:28.425861  317529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/embed-certs-468067/id_rsa Username:docker}
	I1210 23:07:28.523709  317529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:28.545338  317529 pause.go:52] kubelet running: true
	I1210 23:07:28.545407  317529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:28.705787  317529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:28.705913  317529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:28.776674  317529 cri.go:89] found id: "45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140"
	I1210 23:07:28.776697  317529 cri.go:89] found id: "3511a11b6bb3ef6f21c769d491ba25968bb0aaeb52b92310391a70c59c50bcce"
	I1210 23:07:28.776702  317529 cri.go:89] found id: "9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75"
	I1210 23:07:28.776707  317529 cri.go:89] found id: "db93da31acfffbb2a5392569333b7c3d46b434fbda9f06f848008784060f68a0"
	I1210 23:07:28.776710  317529 cri.go:89] found id: "a043df7068ef659113d325d365985d88644c985a3de76a00be5ef60feb663dc8"
	I1210 23:07:28.776715  317529 cri.go:89] found id: "7106cbbed2e1740155de640dba2e41c219c20558eca67ddb29ccb4cf9dee15e8"
	I1210 23:07:28.776719  317529 cri.go:89] found id: "7a770e31c3cb5dd673f9eb4d8362019b70ef3b1f55e73857b7aa5eb2dc9edd45"
	I1210 23:07:28.776723  317529 cri.go:89] found id: "01e91a1d6729c0f408be75ad6d31df3a99ec66513c7a064523330f0bdbf2b192"
	I1210 23:07:28.776728  317529 cri.go:89] found id: "4e26510798550249d8f464c1a3f181c49a0bfaeef43add54ea3a9c1c1a9c090b"
	I1210 23:07:28.776756  317529 cri.go:89] found id: "ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	I1210 23:07:28.776765  317529 cri.go:89] found id: "c437e7c17bd73cb590736ec702bed4f2ba46902dcc3f5b1b262b60113ca64d0e"
	I1210 23:07:28.776769  317529 cri.go:89] found id: ""
	I1210 23:07:28.776818  317529 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:28.789219  317529 retry.go:31] will retry after 252.070547ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:28Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:07:29.041692  317529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:29.054825  317529 pause.go:52] kubelet running: false
	I1210 23:07:29.054899  317529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:29.201487  317529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:29.201552  317529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:29.270954  317529 cri.go:89] found id: "45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140"
	I1210 23:07:29.270979  317529 cri.go:89] found id: "3511a11b6bb3ef6f21c769d491ba25968bb0aaeb52b92310391a70c59c50bcce"
	I1210 23:07:29.270985  317529 cri.go:89] found id: "9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75"
	I1210 23:07:29.270991  317529 cri.go:89] found id: "db93da31acfffbb2a5392569333b7c3d46b434fbda9f06f848008784060f68a0"
	I1210 23:07:29.270995  317529 cri.go:89] found id: "a043df7068ef659113d325d365985d88644c985a3de76a00be5ef60feb663dc8"
	I1210 23:07:29.271000  317529 cri.go:89] found id: "7106cbbed2e1740155de640dba2e41c219c20558eca67ddb29ccb4cf9dee15e8"
	I1210 23:07:29.271005  317529 cri.go:89] found id: "7a770e31c3cb5dd673f9eb4d8362019b70ef3b1f55e73857b7aa5eb2dc9edd45"
	I1210 23:07:29.271010  317529 cri.go:89] found id: "01e91a1d6729c0f408be75ad6d31df3a99ec66513c7a064523330f0bdbf2b192"
	I1210 23:07:29.271014  317529 cri.go:89] found id: "4e26510798550249d8f464c1a3f181c49a0bfaeef43add54ea3a9c1c1a9c090b"
	I1210 23:07:29.271023  317529 cri.go:89] found id: "ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	I1210 23:07:29.271030  317529 cri.go:89] found id: "c437e7c17bd73cb590736ec702bed4f2ba46902dcc3f5b1b262b60113ca64d0e"
	I1210 23:07:29.271035  317529 cri.go:89] found id: ""
	I1210 23:07:29.271083  317529 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:29.283075  317529 retry.go:31] will retry after 394.531569ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:29Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:07:29.678572  317529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:29.693953  317529 pause.go:52] kubelet running: false
	I1210 23:07:29.694020  317529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:29.841829  317529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:29.841924  317529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:29.909185  317529 cri.go:89] found id: "45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140"
	I1210 23:07:29.909208  317529 cri.go:89] found id: "3511a11b6bb3ef6f21c769d491ba25968bb0aaeb52b92310391a70c59c50bcce"
	I1210 23:07:29.909215  317529 cri.go:89] found id: "9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75"
	I1210 23:07:29.909220  317529 cri.go:89] found id: "db93da31acfffbb2a5392569333b7c3d46b434fbda9f06f848008784060f68a0"
	I1210 23:07:29.909224  317529 cri.go:89] found id: "a043df7068ef659113d325d365985d88644c985a3de76a00be5ef60feb663dc8"
	I1210 23:07:29.909229  317529 cri.go:89] found id: "7106cbbed2e1740155de640dba2e41c219c20558eca67ddb29ccb4cf9dee15e8"
	I1210 23:07:29.909234  317529 cri.go:89] found id: "7a770e31c3cb5dd673f9eb4d8362019b70ef3b1f55e73857b7aa5eb2dc9edd45"
	I1210 23:07:29.909238  317529 cri.go:89] found id: "01e91a1d6729c0f408be75ad6d31df3a99ec66513c7a064523330f0bdbf2b192"
	I1210 23:07:29.909244  317529 cri.go:89] found id: "4e26510798550249d8f464c1a3f181c49a0bfaeef43add54ea3a9c1c1a9c090b"
	I1210 23:07:29.909252  317529 cri.go:89] found id: "ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	I1210 23:07:29.909256  317529 cri.go:89] found id: "c437e7c17bd73cb590736ec702bed4f2ba46902dcc3f5b1b262b60113ca64d0e"
	I1210 23:07:29.909261  317529 cri.go:89] found id: ""
	I1210 23:07:29.909306  317529 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:29.921108  317529 retry.go:31] will retry after 499.442146ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:29Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:07:30.420759  317529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:30.437294  317529 pause.go:52] kubelet running: false
	I1210 23:07:30.437362  317529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:30.593151  317529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:30.593264  317529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:30.664184  317529 cri.go:89] found id: "45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140"
	I1210 23:07:30.664210  317529 cri.go:89] found id: "3511a11b6bb3ef6f21c769d491ba25968bb0aaeb52b92310391a70c59c50bcce"
	I1210 23:07:30.664215  317529 cri.go:89] found id: "9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75"
	I1210 23:07:30.664224  317529 cri.go:89] found id: "db93da31acfffbb2a5392569333b7c3d46b434fbda9f06f848008784060f68a0"
	I1210 23:07:30.664228  317529 cri.go:89] found id: "a043df7068ef659113d325d365985d88644c985a3de76a00be5ef60feb663dc8"
	I1210 23:07:30.664231  317529 cri.go:89] found id: "7106cbbed2e1740155de640dba2e41c219c20558eca67ddb29ccb4cf9dee15e8"
	I1210 23:07:30.664234  317529 cri.go:89] found id: "7a770e31c3cb5dd673f9eb4d8362019b70ef3b1f55e73857b7aa5eb2dc9edd45"
	I1210 23:07:30.664236  317529 cri.go:89] found id: "01e91a1d6729c0f408be75ad6d31df3a99ec66513c7a064523330f0bdbf2b192"
	I1210 23:07:30.664239  317529 cri.go:89] found id: "4e26510798550249d8f464c1a3f181c49a0bfaeef43add54ea3a9c1c1a9c090b"
	I1210 23:07:30.664244  317529 cri.go:89] found id: "ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	I1210 23:07:30.664247  317529 cri.go:89] found id: "c437e7c17bd73cb590736ec702bed4f2ba46902dcc3f5b1b262b60113ca64d0e"
	I1210 23:07:30.664249  317529 cri.go:89] found id: ""
	I1210 23:07:30.664295  317529 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:30.678561  317529 out.go:203] 
	W1210 23:07:30.679816  317529 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:07:30.679840  317529 out.go:285] * 
	* 
	W1210 23:07:30.684184  317529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:07:30.685398  317529 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-468067 --alsologtostderr -v=1 failed: exit status 80
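Note: as the stderr above shows, the pause path checks whether the kubelet is active, disables it, enumerates CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, and then runs "sudo runc list -f json"; it is this last step that fails with "open /run/runc: no such file or directory" and surfaces as GUEST_PAUSE. A minimal sketch for reproducing the failing step on the node (profile name and commands taken from the stderr above; the /run/crun check is an assumption, added only because CRI-O may be configured with crun, whose state directory is /run/crun rather than /run/runc):

	out/minikube-linux-amd64 ssh -p embed-certs-468067 "sudo systemctl is-active kubelet"
	out/minikube-linux-amd64 ssh -p embed-certs-468067 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-amd64 ssh -p embed-certs-468067 "sudo runc list -f json"
	out/minikube-linux-amd64 ssh -p embed-certs-468067 "ls -d /run/runc /run/crun"
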
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-468067
helpers_test.go:244: (dbg) docker inspect embed-certs-468067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8",
	        "Created": "2025-12-10T23:05:20.332136032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:26.235606299Z",
	            "FinishedAt": "2025-12-10T23:06:25.173722576Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8-json.log",
	        "Name": "/embed-certs-468067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-468067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-468067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8",
	                "LowerDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-468067",
	                "Source": "/var/lib/docker/volumes/embed-certs-468067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-468067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-468067",
	                "name.minikube.sigs.k8s.io": "embed-certs-468067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "125aee1280051d16924c9f7c269c3a9df1264a40d93a51a696a6f3321fd932e3",
	            "SandboxKey": "/var/run/docker/netns/125aee128005",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-468067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62dd6bab6a632f3b3d47ad53284a920285184de444b92fe6a92c9c747bea6de0",
	                    "EndpointID": "6c970c051d33867cd4ad15e5aa499b385f59a7417a1042716b2639f1cf88af6e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "56:fc:78:31:24:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-468067",
	                        "4b27d4853e79"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
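Note: the docker inspect dump above is the full container object; when only a few fields matter, the --format Go template already used elsewhere in this report (e.g. --format={{.State.Status}}) keeps the output to the relevant pieces. A minimal sketch (container name taken from this run):

	docker container inspect embed-certs-468067 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker container inspect embed-certs-468067 --format '{{json .NetworkSettings.Ports}}'
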
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067: exit status 2 (359.977024ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-468067 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-468067 logs -n 25: (1.138088376s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-177285 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat docker --no-pager                                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/docker/daemon.json                                                                                        │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo docker system info                                                                                                 │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cri-dockerd --version                                                                                              │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat containerd --no-pager                                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/containerd/config.toml                                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo containerd config dump                                                                                             │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl cat crio --no-pager                                                                                      │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo crio config                                                                                                        │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ delete  │ -p auto-177285                                                                                                                         │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ start   │ -p calico-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-177285      │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ image   │ embed-certs-468067 image list --format=json                                                                                            │ embed-certs-468067 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ pause   │ -p embed-certs-468067 --alsologtostderr -v=1                                                                                           │ embed-certs-468067 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:07:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:07:18.905885  314972 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:07:18.906156  314972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:18.906165  314972 out.go:374] Setting ErrFile to fd 2...
	I1210 23:07:18.906169  314972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:18.906379  314972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:07:18.906899  314972 out.go:368] Setting JSON to false
	I1210 23:07:18.908313  314972 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2981,"bootTime":1765405058,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:07:18.908405  314972 start.go:143] virtualization: kvm guest
	I1210 23:07:18.910523  314972 out.go:179] * [calico-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:07:18.912052  314972 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:07:18.912087  314972 notify.go:221] Checking for updates...
	I1210 23:07:18.915131  314972 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:07:18.916637  314972 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:18.918016  314972 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:07:18.919467  314972 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:07:18.920812  314972 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1210 23:07:14.710717  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	W1210 23:07:17.209215  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	I1210 23:07:18.922504  314972 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:18.922591  314972 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:18.922712  314972 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:18.922835  314972 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:07:18.950342  314972 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:07:18.950430  314972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:19.013070  314972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:19.000925488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:19.013223  314972 docker.go:319] overlay module found
	I1210 23:07:19.014904  314972 out.go:179] * Using the docker driver based on user configuration
	I1210 23:07:19.016154  314972 start.go:309] selected driver: docker
	I1210 23:07:19.016168  314972 start.go:927] validating driver "docker" against <nil>
	I1210 23:07:19.016180  314972 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:07:19.016745  314972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:19.089346  314972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:19.077247114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:19.089552  314972 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:07:19.089811  314972 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:07:19.091581  314972 out.go:179] * Using Docker driver with root privileges
	I1210 23:07:19.092956  314972 cni.go:84] Creating CNI manager for "calico"
	I1210 23:07:19.092978  314972 start_flags.go:351] Found "Calico" CNI - setting NetworkPlugin=cni
	I1210 23:07:19.093071  314972 start.go:353] cluster config:
	{Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCo
reDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:07:19.094585  314972 out.go:179] * Starting "calico-177285" primary control-plane node in "calico-177285" cluster
	I1210 23:07:19.095891  314972 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:07:19.097206  314972 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:07:19.098418  314972 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:19.098454  314972 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:07:19.098466  314972 cache.go:65] Caching tarball of preloaded images
	I1210 23:07:19.098517  314972 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:07:19.098573  314972 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:07:19.098584  314972 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:07:19.098716  314972 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/config.json ...
	I1210 23:07:19.098740  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/config.json: {Name:mk6416601240975ffd879732783771c8d4925824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:19.120152  314972 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:07:19.120171  314972 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:07:19.120187  314972 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:07:19.120217  314972 start.go:360] acquireMachinesLock for calico-177285: {Name:mkec978e6c01edaf68f82fc8eab571694440f319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:07:19.120310  314972 start.go:364] duration metric: took 78.645µs to acquireMachinesLock for "calico-177285"
	I1210 23:07:19.120342  314972 start.go:93] Provisioning new machine with config: &{Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:19.120428  314972 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:07:15.501223  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:16.001031  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:16.501194  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:17.001489  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:17.500682  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:18.001430  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:18.500778  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:19.001381  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:19.501688  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:19.613009  307880 kubeadm.go:1114] duration metric: took 4.707585486s to wait for elevateKubeSystemPrivileges
	I1210 23:07:19.613051  307880 kubeadm.go:403] duration metric: took 15.17044723s to StartCluster
	I1210 23:07:19.613078  307880 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:19.613146  307880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:19.615707  307880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:19.615972  307880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:07:19.615984  307880 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:19.616072  307880 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:07:19.616173  307880 addons.go:70] Setting storage-provisioner=true in profile "kindnet-177285"
	I1210 23:07:19.616197  307880 addons.go:239] Setting addon storage-provisioner=true in "kindnet-177285"
	I1210 23:07:19.616189  307880 addons.go:70] Setting default-storageclass=true in profile "kindnet-177285"
	I1210 23:07:19.616226  307880 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-177285"
	I1210 23:07:19.616247  307880 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:19.616234  307880 host.go:66] Checking if "kindnet-177285" exists ...
	I1210 23:07:19.616723  307880 cli_runner.go:164] Run: docker container inspect kindnet-177285 --format={{.State.Status}}
	I1210 23:07:19.617018  307880 cli_runner.go:164] Run: docker container inspect kindnet-177285 --format={{.State.Status}}
	I1210 23:07:19.617939  307880 out.go:179] * Verifying Kubernetes components...
	I1210 23:07:19.620184  307880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:19.646838  307880 addons.go:239] Setting addon default-storageclass=true in "kindnet-177285"
	I1210 23:07:19.646890  307880 host.go:66] Checking if "kindnet-177285" exists ...
	I1210 23:07:19.647363  307880 cli_runner.go:164] Run: docker container inspect kindnet-177285 --format={{.State.Status}}
	I1210 23:07:19.647816  307880 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:07:19.649446  307880 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:07:19.649464  307880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:07:19.649517  307880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-177285
	I1210 23:07:19.678501  307880 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:07:19.678526  307880 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:07:19.678604  307880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-177285
	I1210 23:07:19.683114  307880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/kindnet-177285/id_rsa Username:docker}
	I1210 23:07:19.708427  307880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/kindnet-177285/id_rsa Username:docker}
	I1210 23:07:19.728167  307880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:07:19.800773  307880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:07:19.809898  307880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:07:19.850302  307880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:07:19.939498  307880 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 23:07:19.942546  307880 node_ready.go:35] waiting up to 15m0s for node "kindnet-177285" to be "Ready" ...
	I1210 23:07:20.168027  307880 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:07:20.169414  307880 addons.go:530] duration metric: took 553.337304ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:07:19.122525  314972 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:07:19.122757  314972 start.go:159] libmachine.API.Create for "calico-177285" (driver="docker")
	I1210 23:07:19.122786  314972 client.go:173] LocalClient.Create starting
	I1210 23:07:19.122867  314972 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:07:19.122905  314972 main.go:143] libmachine: Decoding PEM data...
	I1210 23:07:19.122934  314972 main.go:143] libmachine: Parsing certificate...
	I1210 23:07:19.122996  314972 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:07:19.123032  314972 main.go:143] libmachine: Decoding PEM data...
	I1210 23:07:19.123051  314972 main.go:143] libmachine: Parsing certificate...
	I1210 23:07:19.123460  314972 cli_runner.go:164] Run: docker network inspect calico-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:07:19.141984  314972 cli_runner.go:211] docker network inspect calico-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:07:19.142104  314972 network_create.go:284] running [docker network inspect calico-177285] to gather additional debugging logs...
	I1210 23:07:19.142131  314972 cli_runner.go:164] Run: docker network inspect calico-177285
	W1210 23:07:19.161001  314972 cli_runner.go:211] docker network inspect calico-177285 returned with exit code 1
	I1210 23:07:19.161030  314972 network_create.go:287] error running [docker network inspect calico-177285]: docker network inspect calico-177285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-177285 not found
	I1210 23:07:19.161063  314972 network_create.go:289] output of [docker network inspect calico-177285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-177285 not found
	
	** /stderr **
	I1210 23:07:19.161225  314972 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:07:19.179738  314972 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:07:19.180568  314972 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:07:19.181273  314972 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:07:19.181826  314972 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8875699386e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:89:d4:9b:b9:bc} reservation:<nil>}
	I1210 23:07:19.182601  314972 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a5b1d987b87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ca:3e:51:dc:a7:74} reservation:<nil>}
	I1210 23:07:19.183515  314972 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e804c0}
	I1210 23:07:19.183541  314972 network_create.go:124] attempt to create docker network calico-177285 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:07:19.183590  314972 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-177285 calico-177285
	I1210 23:07:19.234988  314972 network_create.go:108] docker network calico-177285 192.168.94.0/24 created
	I1210 23:07:19.235027  314972 kic.go:121] calculated static IP "192.168.94.2" for the "calico-177285" container
	I1210 23:07:19.235108  314972 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:07:19.253724  314972 cli_runner.go:164] Run: docker volume create calico-177285 --label name.minikube.sigs.k8s.io=calico-177285 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:07:19.277887  314972 oci.go:103] Successfully created a docker volume calico-177285
	I1210 23:07:19.277968  314972 cli_runner.go:164] Run: docker run --rm --name calico-177285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-177285 --entrypoint /usr/bin/test -v calico-177285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:07:19.770444  314972 oci.go:107] Successfully prepared a docker volume calico-177285
	I1210 23:07:19.770535  314972 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:19.770547  314972 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:07:19.770634  314972 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:07:23.761451  314972 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.990725779s)
	I1210 23:07:23.761486  314972 kic.go:203] duration metric: took 3.990934724s to extract preloaded images to volume ...
	W1210 23:07:23.761581  314972 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:07:23.761622  314972 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:07:23.761714  314972 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:07:23.816883  314972 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-177285 --name calico-177285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-177285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-177285 --network calico-177285 --ip 192.168.94.2 --volume calico-177285:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	W1210 23:07:19.210397  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	W1210 23:07:21.709867  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	I1210 23:07:20.445017  307880 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-177285" context rescaled to 1 replicas
	W1210 23:07:21.951965  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	W1210 23:07:24.445815  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	I1210 23:07:24.211361  300940 pod_ready.go:94] pod "coredns-66bc5c9577-s8zsm" is "Ready"
	I1210 23:07:24.211392  300940 pod_ready.go:86] duration metric: took 38.506972442s for pod "coredns-66bc5c9577-s8zsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.215494  300940 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.219818  300940 pod_ready.go:94] pod "etcd-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:24.219838  300940 pod_ready.go:86] duration metric: took 4.315292ms for pod "etcd-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.222037  300940 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.226334  300940 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:24.226358  300940 pod_ready.go:86] duration metric: took 4.29735ms for pod "kube-apiserver-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.228343  300940 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.409624  300940 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:24.409689  300940 pod_ready.go:86] duration metric: took 181.32439ms for pod "kube-controller-manager-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.609008  300940 pod_ready.go:83] waiting for pod "kube-proxy-lwnhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.008105  300940 pod_ready.go:94] pod "kube-proxy-lwnhd" is "Ready"
	I1210 23:07:25.008129  300940 pod_ready.go:86] duration metric: took 399.096739ms for pod "kube-proxy-lwnhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.208554  300940 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.608564  300940 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:25.608592  300940 pod_ready.go:86] duration metric: took 400.012915ms for pod "kube-scheduler-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.608607  300940 pod_ready.go:40] duration metric: took 39.909835455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:07:25.655067  300940 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:07:25.657006  300940 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-443884" cluster and "default" namespace by default
	I1210 23:07:24.105423  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Running}}
	I1210 23:07:24.124260  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:24.143178  314972 cli_runner.go:164] Run: docker exec calico-177285 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:07:24.197782  314972 oci.go:144] the created container "calico-177285" has a running status.
	I1210 23:07:24.197812  314972 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa...
	I1210 23:07:24.526853  314972 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:07:24.553179  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:24.571980  314972 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:07:24.572005  314972 kic_runner.go:114] Args: [docker exec --privileged calico-177285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:07:24.620357  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:24.639747  314972 machine.go:94] provisionDockerMachine start ...
	I1210 23:07:24.639863  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:24.657849  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:24.658208  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:24.658232  314972 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:07:24.793637  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-177285
	
	I1210 23:07:24.793698  314972 ubuntu.go:182] provisioning hostname "calico-177285"
	I1210 23:07:24.793772  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:24.812270  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:24.812504  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:24.812519  314972 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-177285 && echo "calico-177285" | sudo tee /etc/hostname
	I1210 23:07:24.957099  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-177285
	
	I1210 23:07:24.957195  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:24.977178  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:24.977455  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:24.977484  314972 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-177285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-177285/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-177285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:07:25.112347  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:07:25.112374  314972 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:07:25.112428  314972 ubuntu.go:190] setting up certificates
	I1210 23:07:25.112445  314972 provision.go:84] configureAuth start
	I1210 23:07:25.112497  314972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-177285
	I1210 23:07:25.130465  314972 provision.go:143] copyHostCerts
	I1210 23:07:25.130539  314972 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:07:25.130553  314972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:07:25.130632  314972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:07:25.130779  314972 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:07:25.130794  314972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:07:25.130836  314972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:07:25.130983  314972 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:07:25.130999  314972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:07:25.131036  314972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:07:25.131111  314972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.calico-177285 san=[127.0.0.1 192.168.94.2 calico-177285 localhost minikube]
	I1210 23:07:25.226095  314972 provision.go:177] copyRemoteCerts
	I1210 23:07:25.226156  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:07:25.226197  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.243879  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:25.340009  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:07:25.359908  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 23:07:25.378132  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:07:25.397146  314972 provision.go:87] duration metric: took 284.687738ms to configureAuth
	I1210 23:07:25.397174  314972 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:07:25.397363  314972 config.go:182] Loaded profile config "calico-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:25.397476  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.416165  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:25.416399  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:25.416423  314972 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:07:25.695420  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:07:25.695447  314972 machine.go:97] duration metric: took 1.055678209s to provisionDockerMachine
	I1210 23:07:25.695461  314972 client.go:176] duration metric: took 6.572668717s to LocalClient.Create
	I1210 23:07:25.695483  314972 start.go:167] duration metric: took 6.572724957s to libmachine.API.Create "calico-177285"
	I1210 23:07:25.695496  314972 start.go:293] postStartSetup for "calico-177285" (driver="docker")
	I1210 23:07:25.695509  314972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:07:25.695578  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:07:25.695626  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.717920  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:25.818661  314972 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:07:25.822152  314972 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:07:25.822176  314972 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:07:25.822188  314972 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:07:25.822234  314972 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:07:25.822306  314972 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:07:25.822407  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:07:25.829937  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:07:25.850375  314972 start.go:296] duration metric: took 154.867293ms for postStartSetup
	I1210 23:07:25.850756  314972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-177285
	I1210 23:07:25.868605  314972 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/config.json ...
	I1210 23:07:25.868909  314972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:07:25.868960  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.888202  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:25.984533  314972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:07:25.989221  314972 start.go:128] duration metric: took 6.868779855s to createHost
	I1210 23:07:25.989246  314972 start.go:83] releasing machines lock for "calico-177285", held for 6.868924467s
	I1210 23:07:25.989316  314972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-177285
	I1210 23:07:26.008356  314972 ssh_runner.go:195] Run: cat /version.json
	I1210 23:07:26.008414  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:26.008439  314972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:07:26.008528  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:26.028505  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:26.029363  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:26.176586  314972 ssh_runner.go:195] Run: systemctl --version
	I1210 23:07:26.183271  314972 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:07:26.220365  314972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:07:26.225510  314972 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:07:26.225578  314972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:07:26.252731  314972 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:07:26.252752  314972 start.go:496] detecting cgroup driver to use...
	I1210 23:07:26.252781  314972 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:07:26.252819  314972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:07:26.270171  314972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:07:26.282627  314972 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:07:26.282711  314972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:07:26.299694  314972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:07:26.317579  314972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:07:26.402291  314972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:07:26.489609  314972 docker.go:234] disabling docker service ...
	I1210 23:07:26.489700  314972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:07:26.508606  314972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:07:26.521601  314972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:07:26.611626  314972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:07:26.693181  314972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:07:26.705493  314972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:07:26.719599  314972 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:07:26.719680  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.729960  314972 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:07:26.730018  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.738804  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.747571  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.756600  314972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:07:26.764695  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.773351  314972 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.786959  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.795783  314972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:07:26.803376  314972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:07:26.810984  314972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:26.889469  314972 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:07:27.017546  314972 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:07:27.017606  314972 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:07:27.021633  314972 start.go:564] Will wait 60s for crictl version
	I1210 23:07:27.021709  314972 ssh_runner.go:195] Run: which crictl
	I1210 23:07:27.025369  314972 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:07:27.049052  314972 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:07:27.049125  314972 ssh_runner.go:195] Run: crio --version
	I1210 23:07:27.077144  314972 ssh_runner.go:195] Run: crio --version
	I1210 23:07:27.109180  314972 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:07:27.110340  314972 cli_runner.go:164] Run: docker network inspect calico-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:07:27.129692  314972 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 23:07:27.133804  314972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:07:27.143771  314972 kubeadm.go:884] updating cluster {Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:07:27.143896  314972 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:27.143952  314972 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:07:27.175927  314972 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:07:27.175949  314972 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:07:27.176007  314972 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:07:27.200987  314972 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:07:27.201011  314972 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:07:27.201019  314972 kubeadm.go:935] updating node { 192.168.94.2  8443 v1.34.2 crio true true} ...
	I1210 23:07:27.201107  314972 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-177285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1210 23:07:27.201170  314972 ssh_runner.go:195] Run: crio config
	I1210 23:07:27.247062  314972 cni.go:84] Creating CNI manager for "calico"
	I1210 23:07:27.247095  314972 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:07:27.247117  314972 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-177285 NodeName:calico-177285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:07:27.247230  314972 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-177285"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:07:27.247291  314972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:07:27.255292  314972 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:07:27.255355  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:07:27.263302  314972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 23:07:27.276009  314972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:07:27.291693  314972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 23:07:27.303980  314972 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:07:27.307639  314972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:07:27.317515  314972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:27.398511  314972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:07:27.422938  314972 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285 for IP: 192.168.94.2
	I1210 23:07:27.422961  314972 certs.go:195] generating shared ca certs ...
	I1210 23:07:27.422980  314972 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.423155  314972 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:07:27.423211  314972 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:07:27.423227  314972 certs.go:257] generating profile certs ...
	I1210 23:07:27.423301  314972 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.key
	I1210 23:07:27.423318  314972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.crt with IP's: []
	I1210 23:07:27.507838  314972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.crt ...
	I1210 23:07:27.507865  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.crt: {Name:mk54aaa2857e73ea1f27c44ac0ee422854265672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.508045  314972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.key ...
	I1210 23:07:27.508056  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.key: {Name:mkf188842d56ae0fbb344d028c40f8bf650ed9a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.508145  314972 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4
	I1210 23:07:27.508160  314972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1210 23:07:27.593780  314972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4 ...
	I1210 23:07:27.593806  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4: {Name:mk7471040ab55b0a380d85ed8204b36de60e471a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.593965  314972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4 ...
	I1210 23:07:27.593978  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4: {Name:mkfd1461d452c5b3a5b938ad0a32ca6e345651a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.594057  314972 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt
	I1210 23:07:27.594133  314972 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key
	I1210 23:07:27.594193  314972 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key
	I1210 23:07:27.594212  314972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt with IP's: []
	I1210 23:07:27.762794  314972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt ...
	I1210 23:07:27.762821  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt: {Name:mk79496a48285a75568f813fed65772cc8edcfa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.762989  314972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key ...
	I1210 23:07:27.763000  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key: {Name:mk8606d0ba3e44ad80c3fb12fa05a6474606f962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.763180  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:07:27.763222  314972 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:07:27.763232  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:07:27.763259  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:07:27.763285  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:07:27.763308  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:07:27.763356  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:07:27.763966  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:07:27.782817  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:07:27.800488  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:07:27.818773  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:07:27.836456  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 23:07:27.854019  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:07:27.871804  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:07:27.889729  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 23:07:27.908382  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:07:27.928441  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:07:27.947361  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:07:27.968371  314972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:07:27.981847  314972 ssh_runner.go:195] Run: openssl version
	I1210 23:07:27.988326  314972 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:07:27.996996  314972 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:07:28.005612  314972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:07:28.009970  314972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:07:28.010031  314972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:07:28.047914  314972 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:07:28.056065  314972 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:07:28.064673  314972 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.073591  314972 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:07:28.081390  314972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.085164  314972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.085222  314972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.125016  314972 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:07:28.133427  314972 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:07:28.141124  314972 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.149181  314972 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:07:28.157136  314972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.161135  314972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.161186  314972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.204410  314972 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:07:28.213763  314972 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:07:28.221741  314972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:07:28.225662  314972 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:07:28.225730  314972 kubeadm.go:401] StartCluster: {Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:07:28.225821  314972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:07:28.225874  314972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:07:28.256915  314972 cri.go:89] found id: ""
	I1210 23:07:28.256984  314972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:07:28.265459  314972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:07:28.273860  314972 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:07:28.273915  314972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:07:28.282956  314972 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:07:28.282976  314972 kubeadm.go:158] found existing configuration files:
	
	I1210 23:07:28.283022  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:07:28.292033  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:07:28.292090  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:07:28.300223  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:07:28.308995  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:07:28.309052  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:07:28.316744  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:07:28.324607  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:07:28.324695  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:07:28.333238  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:07:28.341573  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:07:28.341631  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:07:28.350531  314972 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:07:28.420237  314972 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:07:28.484893  314972 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1210 23:07:26.446732  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	W1210 23:07:28.446778  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
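The start-log above shows minikube staging each CA bundle and then running `openssl x509 -hash -noout` followed by `ln -fs <cert> /etc/ssl/certs/<hash>.0`; the `<hash>.0` symlink is how OpenSSL-based clients locate a CA by subject hash (in this run the minikube CA hashed to b5213941, hence the link to /etc/ssl/certs/b5213941.0). A minimal Go sketch of that hash-and-link step, purely illustrative and not minikube's actual implementation:

// Illustrative sketch only (not minikube's code): compute the OpenSSL subject
// hash of a CA certificate and create the /etc/ssl/certs/<hash>.0 symlink,
// mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath string) error {
	// openssl prints the subject hash on stdout, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, as "ln -fs" would.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}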
	
	
	==> CRI-O <==
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753505433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753660385Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/00e7c71d69a49d9b98c62b7a90d00ebc6efa1f41178aec0080b472ec20f4f410/merged/etc/passwd: no such file or directory"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753688158Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/00e7c71d69a49d9b98c62b7a90d00ebc6efa1f41178aec0080b472ec20f4f410/merged/etc/group: no such file or directory"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753972677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.781758955Z" level=info msg="Created container 45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140: kube-system/storage-provisioner/storage-provisioner" id=fb380cca-7177-4b0d-8e50-ad26c4bee50d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.782392156Z" level=info msg="Starting container: 45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140" id=d480947f-4783-49ec-abbc-837379251c00 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.784672605Z" level=info msg="Started container" PID=1711 containerID=45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140 description=kube-system/storage-provisioner/storage-provisioner id=d480947f-4783-49ec-abbc-837379251c00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=04175651feb158c698907f0f23ae069739ee6f65659b3e8639f896973fe2cfaf
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.437029075Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.441209703Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.441241429Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.441268407Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.444895628Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.444925624Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.444943813Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.448491574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.44851379Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.448535719Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.452051091Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.452074675Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.452095009Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.455454689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.455477601Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.455497549Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.459111538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.459137569Z" level=info msg="Updated default CNI network name to kindnet"
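The CRI-O events above (CREATE and WRITE on 10-kindnet.conflist.temp, then a RENAME onto 10-kindnet.conflist) trace the usual write-to-temp-then-rename pattern for replacing a config file atomically, so the CNI watcher never reads a half-written conflist. A minimal Go sketch of the pattern, using a placeholder payload rather than kindnet's real conflist:

// Sketch of the write-temp-then-rename pattern visible in the CRI-O events
// above; illustrative only, not kindnet's implementation.
package main

import (
	"os"
	"path/filepath"
)

// writeAtomically writes data to a temp file next to the destination and then
// renames it into place, so readers only ever observe a complete file.
func writeAtomically(dst string, data []byte) error {
	tmp := dst + ".temp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	// rename(2) within one filesystem replaces dst atomically.
	return os.Rename(tmp, dst)
}

func main() {
	conflist := filepath.Join("/etc/cni/net.d", "10-kindnet.conflist")
	// Placeholder JSON body for illustration only.
	_ = writeAtomically(conflist, []byte(`{"name":"kindnet","plugins":[]}`))
}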
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	45fef78fec697       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   04175651feb15       storage-provisioner                          kube-system
	ca3188ebca191       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   a3d430ea85943       dashboard-metrics-scraper-6ffb444bf9-tqmd5   kubernetes-dashboard
	c437e7c17bd73       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   7931270c7f412       kubernetes-dashboard-855c9754f9-4l5m7        kubernetes-dashboard
	3511a11b6bb3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   c658845b30623       coredns-66bc5c9577-qw48c                     kube-system
	0d182ea0c7d43       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   d71c0ae2f17bd       busybox                                      default
	9565be37ba4bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   04175651feb15       storage-provisioner                          kube-system
	db93da31acfff       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   9b8200f5a08d2       kube-proxy-27pft                             kube-system
	a043df7068ef6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   f16fe216e294c       kindnet-dkdlj                                kube-system
	7106cbbed2e17       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           58 seconds ago      Running             kube-apiserver              0                   00f20f9bfd3a5       kube-apiserver-embed-certs-468067            kube-system
	7a770e31c3cb5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           58 seconds ago      Running             kube-controller-manager     0                   1112cd9bb8721       kube-controller-manager-embed-certs-468067   kube-system
	01e91a1d6729c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   ef9dac8446d39       etcd-embed-certs-468067                      kube-system
	4e26510798550       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           58 seconds ago      Running             kube-scheduler              0                   42fbba0572928       kube-scheduler-embed-certs-468067            kube-system
	
	
	==> coredns [3511a11b6bb3ef6f21c769d491ba25968bb0aaeb52b92310391a70c59c50bcce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49304 - 47449 "HINFO IN 991423004734752759.2042785437316989899. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028641651s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
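Every failure in the CoreDNS excerpt above is the kubernetes plugin timing out against the service VIP ("dial tcp 10.96.0.1:443: i/o timeout"), which is consistent with the cluster's service VIP being briefly unreachable from the pod network right after the node restart. A tiny, illustrative Go probe for that symptom (a plain TCP dial with a deadline; not something CoreDNS itself runs):

// Illustrative connectivity probe for the failure mode in the CoreDNS log
// above: dial the kubernetes service VIP with a timeout and report the error.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}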
	
	
	==> describe nodes <==
	Name:               embed-certs-468067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-468067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=embed-certs-468067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_05_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:05:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-468067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:07:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-468067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                d2cd28f2-4471-41b6-a37d-4eadfd61fbb3
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-qw48c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-468067                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-dkdlj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-468067             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-468067    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-27pft                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-468067             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tqmd5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4l5m7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-468067 event: Registered Node embed-certs-468067 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-468067 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                  node-controller  Node embed-certs-468067 event: Registered Node embed-certs-468067 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [01e91a1d6729c0f408be75ad6d31df3a99ec66513c7a064523330f0bdbf2b192] <==
	{"level":"warn","ts":"2025-12-10T23:06:34.094598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.103908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.114699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.124127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.133400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.141879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.150673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.169902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.187056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.200797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.213234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.225395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.235168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.243245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.251741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.261292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.269932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.279545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.292361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.300866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.309977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.331881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.348010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.348368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.414551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:07:31 up 49 min,  0 user,  load average: 7.49, 4.39, 2.49
	Linux embed-certs-468067 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a043df7068ef659113d325d365985d88644c985a3de76a00be5ef60feb663dc8] <==
	I1210 23:06:36.235019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:36.235282       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 23:06:36.235429       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:36.235449       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:36.235476       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:36.436199       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:36.436363       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:36.436441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:36.436949       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 23:07:06.437390       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 23:07:06.437408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 23:07:06.437394       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 23:07:06.437395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1210 23:07:07.936731       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:07:07.936759       1 metrics.go:72] Registering metrics
	I1210 23:07:07.936823       1 controller.go:711] "Syncing nftables rules"
	I1210 23:07:16.436710       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 23:07:16.436768       1 main.go:301] handling current node
	I1210 23:07:26.444874       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 23:07:26.444915       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7106cbbed2e1740155de640dba2e41c219c20558eca67ddb29ccb4cf9dee15e8] <==
	I1210 23:06:35.030354       1 aggregator.go:171] initial CRD sync complete...
	I1210 23:06:35.030399       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:06:35.030424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:06:35.030478       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:06:35.032171       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:06:35.026858       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:06:35.045480       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:06:35.053265       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 23:06:35.053310       1 policy_source.go:240] refreshing policies
	I1210 23:06:35.070225       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:35.094476       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 23:06:35.094527       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 23:06:35.325818       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:06:35.352922       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:35.370874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:35.379684       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:35.389858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:35.428679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.120.149"}
	I1210 23:06:35.441229       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.42.102"}
	I1210 23:06:35.896770       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:06:38.548918       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:38.796835       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:38.796835       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:38.945000       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:06:38.945000       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7a770e31c3cb5dd673f9eb4d8362019b70ef3b1f55e73857b7aa5eb2dc9edd45] <==
	I1210 23:06:38.478989       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 23:06:38.480180       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 23:06:38.482400       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 23:06:38.483697       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 23:06:38.485959       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 23:06:38.487477       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:06:38.489716       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:06:38.492131       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 23:06:38.492298       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 23:06:38.492484       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 23:06:38.492462       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 23:06:38.492522       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 23:06:38.492575       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 23:06:38.492686       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-468067"
	I1210 23:06:38.492850       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 23:06:38.492885       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 23:06:38.492999       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:06:38.493112       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 23:06:38.493430       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 23:06:38.494003       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 23:06:38.495157       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:06:38.499680       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:38.509861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:06:38.512998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:38.516139       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [db93da31acfffbb2a5392569333b7c3d46b434fbda9f06f848008784060f68a0] <==
	I1210 23:06:36.072210       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:36.139564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:06:36.240137       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:06:36.240170       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 23:06:36.240264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:36.261042       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:36.261114       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:06:36.266260       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:36.266783       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:06:36.266816       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:36.268373       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:36.268450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:36.268450       1 config.go:200] "Starting service config controller"
	I1210 23:06:36.268471       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:36.268621       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:36.268842       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:36.268716       1 config.go:309] "Starting node config controller"
	I1210 23:06:36.268861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:36.268869       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:36.368577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:36.369733       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 23:06:36.369840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4e26510798550249d8f464c1a3f181c49a0bfaeef43add54ea3a9c1c1a9c090b] <==
	I1210 23:06:33.761138       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:06:34.906908       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:06:34.906949       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:06:34.906960       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:06:34.906969       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:06:34.988379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 23:06:34.989458       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:34.993481       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:34.993523       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:34.994070       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:06:34.994150       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 23:06:35.008640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found, role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 23:06:35.018213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 23:06:35.018287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 23:06:35.018578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:06:35.018600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:06:35.019397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 23:06:35.019724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 23:06:35.020051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:06:35.020322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1210 23:06:36.493707       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:06:38 embed-certs-468067 kubelet[723]: I1210 23:06:38.994166     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj75p\" (UniqueName: \"kubernetes.io/projected/cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03-kube-api-access-fj75p\") pod \"dashboard-metrics-scraper-6ffb444bf9-tqmd5\" (UID: \"cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5"
	Dec 10 23:06:38 embed-certs-468067 kubelet[723]: I1210 23:06:38.994193     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c28g\" (UniqueName: \"kubernetes.io/projected/ceb10413-18a8-45d1-9707-8a032353a846-kube-api-access-4c28g\") pod \"kubernetes-dashboard-855c9754f9-4l5m7\" (UID: \"ceb10413-18a8-45d1-9707-8a032353a846\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l5m7"
	Dec 10 23:06:44 embed-certs-468067 kubelet[723]: I1210 23:06:44.751846     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 23:06:44 embed-certs-468067 kubelet[723]: I1210 23:06:44.973494     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l5m7" podStartSLOduration=2.680518683 podStartE2EDuration="6.973473458s" podCreationTimestamp="2025-12-10 23:06:38 +0000 UTC" firstStartedPulling="2025-12-10 23:06:39.248354102 +0000 UTC m=+6.750442308" lastFinishedPulling="2025-12-10 23:06:43.541308867 +0000 UTC m=+11.043397083" observedRunningTime="2025-12-10 23:06:43.683409169 +0000 UTC m=+11.185497383" watchObservedRunningTime="2025-12-10 23:06:44.973473458 +0000 UTC m=+12.475561671"
	Dec 10 23:06:46 embed-certs-468067 kubelet[723]: I1210 23:06:46.675404     723 scope.go:117] "RemoveContainer" containerID="2a11f7edecef74848b8399b55dd957e8c8a08627bf00f019d0cd523dfa713785"
	Dec 10 23:06:47 embed-certs-468067 kubelet[723]: I1210 23:06:47.688764     723 scope.go:117] "RemoveContainer" containerID="2a11f7edecef74848b8399b55dd957e8c8a08627bf00f019d0cd523dfa713785"
	Dec 10 23:06:47 embed-certs-468067 kubelet[723]: I1210 23:06:47.689707     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:06:47 embed-certs-468067 kubelet[723]: E1210 23:06:47.689882     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:06:48 embed-certs-468067 kubelet[723]: I1210 23:06:48.692592     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:06:48 embed-certs-468067 kubelet[723]: E1210 23:06:48.692851     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:06:49 embed-certs-468067 kubelet[723]: I1210 23:06:49.695002     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:06:49 embed-certs-468067 kubelet[723]: E1210 23:06:49.695212     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: I1210 23:07:00.602796     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: I1210 23:07:00.726289     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: I1210 23:07:00.726488     723 scope.go:117] "RemoveContainer" containerID="ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: E1210 23:07:00.726705     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:06 embed-certs-468067 kubelet[723]: I1210 23:07:06.746182     723 scope.go:117] "RemoveContainer" containerID="9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75"
	Dec 10 23:07:08 embed-certs-468067 kubelet[723]: I1210 23:07:08.655312     723 scope.go:117] "RemoveContainer" containerID="ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	Dec 10 23:07:08 embed-certs-468067 kubelet[723]: E1210 23:07:08.655584     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:20 embed-certs-468067 kubelet[723]: I1210 23:07:20.602612     723 scope.go:117] "RemoveContainer" containerID="ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	Dec 10 23:07:20 embed-certs-468067 kubelet[723]: E1210 23:07:20.602858     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: kubelet.service: Consumed 1.804s CPU time.
	
	
	==> kubernetes-dashboard [c437e7c17bd73cb590736ec702bed4f2ba46902dcc3f5b1b262b60113ca64d0e] <==
	2025/12/10 23:06:43 Using namespace: kubernetes-dashboard
	2025/12/10 23:06:43 Using in-cluster config to connect to apiserver
	2025/12/10 23:06:43 Using secret token for csrf signing
	2025/12/10 23:06:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:06:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:06:43 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 23:06:43 Generating JWE encryption key
	2025/12/10 23:06:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:06:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:06:43 Initializing JWE encryption key from synchronized object
	2025/12/10 23:06:43 Creating in-cluster Sidecar client
	2025/12/10 23:06:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:43 Serving insecurely on HTTP port: 9090
	2025/12/10 23:07:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:43 Starting overwatch
	
	
	==> storage-provisioner [45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140] <==
	I1210 23:07:06.797872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:07:06.806991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:07:06.807056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:07:06.809913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:10.267091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:14.527555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:18.126072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:21.180305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.202427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.208237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:24.208418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:07:24.208541       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48704f4d-8b51-4c73-91f7-52bbe5715cf0", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-468067_9788c96a-4f4e-4856-8bee-f4f8aa06aab5 became leader
	I1210 23:07:24.208635       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-468067_9788c96a-4f4e-4856-8bee-f4f8aa06aab5!
	W1210 23:07:24.211310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.215592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:24.309047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-468067_9788c96a-4f4e-4856-8bee-f4f8aa06aab5!
	W1210 23:07:26.219067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:26.223750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:28.227283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:28.232750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:30.236367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:30.240155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75] <==
	I1210 23:06:36.028152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:07:06.032109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468067 -n embed-certs-468067
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468067 -n embed-certs-468067: exit status 2 (341.827653ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-468067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-468067
helpers_test.go:244: (dbg) docker inspect embed-certs-468067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8",
	        "Created": "2025-12-10T23:05:20.332136032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:26.235606299Z",
	            "FinishedAt": "2025-12-10T23:06:25.173722576Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8/4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8-json.log",
	        "Name": "/embed-certs-468067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-468067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-468067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b27d4853e796c2d72c44127297b41e3c769486d453c1f5efee90f80ec6560b8",
	                "LowerDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fcecc750d735943a89a3f547b5eddf6d1ef4026a239d6c32fa8279f924cd435e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-468067",
	                "Source": "/var/lib/docker/volumes/embed-certs-468067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-468067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-468067",
	                "name.minikube.sigs.k8s.io": "embed-certs-468067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "125aee1280051d16924c9f7c269c3a9df1264a40d93a51a696a6f3321fd932e3",
	            "SandboxKey": "/var/run/docker/netns/125aee128005",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-468067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62dd6bab6a632f3b3d47ad53284a920285184de444b92fe6a92c9c747bea6de0",
	                    "EndpointID": "6c970c051d33867cd4ad15e5aa499b385f59a7417a1042716b2639f1cf88af6e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "56:fc:78:31:24:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-468067",
	                        "4b27d4853e79"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067: exit status 2 (337.688727ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-468067 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-468067 logs -n 25: (1.167837028s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-177285 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat docker --no-pager                                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/docker/daemon.json                                                                                        │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo docker system info                                                                                                 │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cri-dockerd --version                                                                                              │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat containerd --no-pager                                                                                │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/containerd/config.toml                                                                                    │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo containerd config dump                                                                                             │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl cat crio --no-pager                                                                                      │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo crio config                                                                                                        │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ delete  │ -p auto-177285                                                                                                                         │ auto-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ start   │ -p calico-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-177285      │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ image   │ embed-certs-468067 image list --format=json                                                                                            │ embed-certs-468067 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ pause   │ -p embed-certs-468067 --alsologtostderr -v=1                                                                                           │ embed-certs-468067 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:07:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:07:18.905885  314972 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:07:18.906156  314972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:18.906165  314972 out.go:374] Setting ErrFile to fd 2...
	I1210 23:07:18.906169  314972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:18.906379  314972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:07:18.906899  314972 out.go:368] Setting JSON to false
	I1210 23:07:18.908313  314972 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2981,"bootTime":1765405058,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:07:18.908405  314972 start.go:143] virtualization: kvm guest
	I1210 23:07:18.910523  314972 out.go:179] * [calico-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:07:18.912052  314972 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:07:18.912087  314972 notify.go:221] Checking for updates...
	I1210 23:07:18.915131  314972 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:07:18.916637  314972 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:18.918016  314972 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:07:18.919467  314972 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:07:18.920812  314972 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1210 23:07:14.710717  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	W1210 23:07:17.209215  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	I1210 23:07:18.922504  314972 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:18.922591  314972 config.go:182] Loaded profile config "embed-certs-468067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:18.922712  314972 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:18.922835  314972 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:07:18.950342  314972 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:07:18.950430  314972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:19.013070  314972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:19.000925488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:19.013223  314972 docker.go:319] overlay module found
	I1210 23:07:19.014904  314972 out.go:179] * Using the docker driver based on user configuration
	I1210 23:07:19.016154  314972 start.go:309] selected driver: docker
	I1210 23:07:19.016168  314972 start.go:927] validating driver "docker" against <nil>
	I1210 23:07:19.016180  314972 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:07:19.016745  314972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:19.089346  314972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:19.077247114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:19.089552  314972 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:07:19.089811  314972 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:07:19.091581  314972 out.go:179] * Using Docker driver with root privileges
	I1210 23:07:19.092956  314972 cni.go:84] Creating CNI manager for "calico"
	I1210 23:07:19.092978  314972 start_flags.go:351] Found "Calico" CNI - setting NetworkPlugin=cni
	I1210 23:07:19.093071  314972 start.go:353] cluster config:
	{Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:07:19.094585  314972 out.go:179] * Starting "calico-177285" primary control-plane node in "calico-177285" cluster
	I1210 23:07:19.095891  314972 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:07:19.097206  314972 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:07:19.098418  314972 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:19.098454  314972 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:07:19.098466  314972 cache.go:65] Caching tarball of preloaded images
	I1210 23:07:19.098517  314972 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:07:19.098573  314972 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:07:19.098584  314972 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:07:19.098716  314972 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/config.json ...
	I1210 23:07:19.098740  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/config.json: {Name:mk6416601240975ffd879732783771c8d4925824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:19.120152  314972 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:07:19.120171  314972 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:07:19.120187  314972 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:07:19.120217  314972 start.go:360] acquireMachinesLock for calico-177285: {Name:mkec978e6c01edaf68f82fc8eab571694440f319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:07:19.120310  314972 start.go:364] duration metric: took 78.645µs to acquireMachinesLock for "calico-177285"
	I1210 23:07:19.120342  314972 start.go:93] Provisioning new machine with config: &{Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:19.120428  314972 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:07:15.501223  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:16.001031  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:16.501194  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:17.001489  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:17.500682  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:18.001430  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:18.500778  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:19.001381  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:19.501688  307880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:19.613009  307880 kubeadm.go:1114] duration metric: took 4.707585486s to wait for elevateKubeSystemPrivileges
	I1210 23:07:19.613051  307880 kubeadm.go:403] duration metric: took 15.17044723s to StartCluster
	I1210 23:07:19.613078  307880 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:19.613146  307880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:19.615707  307880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:19.615972  307880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:07:19.615984  307880 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:19.616072  307880 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:07:19.616173  307880 addons.go:70] Setting storage-provisioner=true in profile "kindnet-177285"
	I1210 23:07:19.616197  307880 addons.go:239] Setting addon storage-provisioner=true in "kindnet-177285"
	I1210 23:07:19.616189  307880 addons.go:70] Setting default-storageclass=true in profile "kindnet-177285"
	I1210 23:07:19.616226  307880 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-177285"
	I1210 23:07:19.616247  307880 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:19.616234  307880 host.go:66] Checking if "kindnet-177285" exists ...
	I1210 23:07:19.616723  307880 cli_runner.go:164] Run: docker container inspect kindnet-177285 --format={{.State.Status}}
	I1210 23:07:19.617018  307880 cli_runner.go:164] Run: docker container inspect kindnet-177285 --format={{.State.Status}}
	I1210 23:07:19.617939  307880 out.go:179] * Verifying Kubernetes components...
	I1210 23:07:19.620184  307880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:19.646838  307880 addons.go:239] Setting addon default-storageclass=true in "kindnet-177285"
	I1210 23:07:19.646890  307880 host.go:66] Checking if "kindnet-177285" exists ...
	I1210 23:07:19.647363  307880 cli_runner.go:164] Run: docker container inspect kindnet-177285 --format={{.State.Status}}
	I1210 23:07:19.647816  307880 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:07:19.649446  307880 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:07:19.649464  307880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:07:19.649517  307880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-177285
	I1210 23:07:19.678501  307880 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:07:19.678526  307880 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:07:19.678604  307880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-177285
	I1210 23:07:19.683114  307880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/kindnet-177285/id_rsa Username:docker}
	I1210 23:07:19.708427  307880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/kindnet-177285/id_rsa Username:docker}
	I1210 23:07:19.728167  307880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 23:07:19.800773  307880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:07:19.809898  307880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:07:19.850302  307880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:07:19.939498  307880 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 23:07:19.942546  307880 node_ready.go:35] waiting up to 15m0s for node "kindnet-177285" to be "Ready" ...
	I1210 23:07:20.168027  307880 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 23:07:20.169414  307880 addons.go:530] duration metric: took 553.337304ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 23:07:19.122525  314972 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:07:19.122757  314972 start.go:159] libmachine.API.Create for "calico-177285" (driver="docker")
	I1210 23:07:19.122786  314972 client.go:173] LocalClient.Create starting
	I1210 23:07:19.122867  314972 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:07:19.122905  314972 main.go:143] libmachine: Decoding PEM data...
	I1210 23:07:19.122934  314972 main.go:143] libmachine: Parsing certificate...
	I1210 23:07:19.122996  314972 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:07:19.123032  314972 main.go:143] libmachine: Decoding PEM data...
	I1210 23:07:19.123051  314972 main.go:143] libmachine: Parsing certificate...
	I1210 23:07:19.123460  314972 cli_runner.go:164] Run: docker network inspect calico-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:07:19.141984  314972 cli_runner.go:211] docker network inspect calico-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:07:19.142104  314972 network_create.go:284] running [docker network inspect calico-177285] to gather additional debugging logs...
	I1210 23:07:19.142131  314972 cli_runner.go:164] Run: docker network inspect calico-177285
	W1210 23:07:19.161001  314972 cli_runner.go:211] docker network inspect calico-177285 returned with exit code 1
	I1210 23:07:19.161030  314972 network_create.go:287] error running [docker network inspect calico-177285]: docker network inspect calico-177285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-177285 not found
	I1210 23:07:19.161063  314972 network_create.go:289] output of [docker network inspect calico-177285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-177285 not found
	
	** /stderr **
	I1210 23:07:19.161225  314972 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:07:19.179738  314972 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:07:19.180568  314972 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:07:19.181273  314972 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:07:19.181826  314972 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8875699386e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:89:d4:9b:b9:bc} reservation:<nil>}
	I1210 23:07:19.182601  314972 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a5b1d987b87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ca:3e:51:dc:a7:74} reservation:<nil>}
	I1210 23:07:19.183515  314972 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e804c0}
	I1210 23:07:19.183541  314972 network_create.go:124] attempt to create docker network calico-177285 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 23:07:19.183590  314972 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-177285 calico-177285
	I1210 23:07:19.234988  314972 network_create.go:108] docker network calico-177285 192.168.94.0/24 created
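The subnet scan and network creation above boil down to something like the following sketch (simplified; the command in the log additionally passes --ip-masq, --icc and the created_by/name minikube labels):

	docker network ls -q | xargs docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'   # subnets already taken
	docker network create --driver=bridge \
	  --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	  -o com.docker.network.driver.mtu=1500 calico-177285              # first free /24 in this run
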
	I1210 23:07:19.235027  314972 kic.go:121] calculated static IP "192.168.94.2" for the "calico-177285" container
	I1210 23:07:19.235108  314972 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:07:19.253724  314972 cli_runner.go:164] Run: docker volume create calico-177285 --label name.minikube.sigs.k8s.io=calico-177285 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:07:19.277887  314972 oci.go:103] Successfully created a docker volume calico-177285
	I1210 23:07:19.277968  314972 cli_runner.go:164] Run: docker run --rm --name calico-177285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-177285 --entrypoint /usr/bin/test -v calico-177285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:07:19.770444  314972 oci.go:107] Successfully prepared a docker volume calico-177285
	I1210 23:07:19.770535  314972 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:19.770547  314972 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:07:19.770634  314972 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:07:23.761451  314972 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.990725779s)
	I1210 23:07:23.761486  314972 kic.go:203] duration metric: took 3.990934724s to extract preloaded images to volume ...
	W1210 23:07:23.761581  314972 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 23:07:23.761622  314972 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 23:07:23.761714  314972 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:07:23.816883  314972 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-177285 --name calico-177285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-177285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-177285 --network calico-177285 --ip 192.168.94.2 --volume calico-177285:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	W1210 23:07:19.210397  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	W1210 23:07:21.709867  300940 pod_ready.go:104] pod "coredns-66bc5c9577-s8zsm" is not "Ready", error: <nil>
	I1210 23:07:20.445017  307880 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-177285" context rescaled to 1 replicas
	W1210 23:07:21.951965  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	W1210 23:07:24.445815  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	I1210 23:07:24.211361  300940 pod_ready.go:94] pod "coredns-66bc5c9577-s8zsm" is "Ready"
	I1210 23:07:24.211392  300940 pod_ready.go:86] duration metric: took 38.506972442s for pod "coredns-66bc5c9577-s8zsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.215494  300940 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.219818  300940 pod_ready.go:94] pod "etcd-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:24.219838  300940 pod_ready.go:86] duration metric: took 4.315292ms for pod "etcd-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.222037  300940 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.226334  300940 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:24.226358  300940 pod_ready.go:86] duration metric: took 4.29735ms for pod "kube-apiserver-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.228343  300940 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.409624  300940 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:24.409689  300940 pod_ready.go:86] duration metric: took 181.32439ms for pod "kube-controller-manager-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:24.609008  300940 pod_ready.go:83] waiting for pod "kube-proxy-lwnhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.008105  300940 pod_ready.go:94] pod "kube-proxy-lwnhd" is "Ready"
	I1210 23:07:25.008129  300940 pod_ready.go:86] duration metric: took 399.096739ms for pod "kube-proxy-lwnhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.208554  300940 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.608564  300940 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-443884" is "Ready"
	I1210 23:07:25.608592  300940 pod_ready.go:86] duration metric: took 400.012915ms for pod "kube-scheduler-default-k8s-diff-port-443884" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:25.608607  300940 pod_ready.go:40] duration metric: took 39.909835455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:07:25.655067  300940 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:07:25.657006  300940 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-443884" cluster and "default" namespace by default
	I1210 23:07:24.105423  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Running}}
	I1210 23:07:24.124260  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:24.143178  314972 cli_runner.go:164] Run: docker exec calico-177285 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:07:24.197782  314972 oci.go:144] the created container "calico-177285" has a running status.
	I1210 23:07:24.197812  314972 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa...
	I1210 23:07:24.526853  314972 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:07:24.553179  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:24.571980  314972 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:07:24.572005  314972 kic_runner.go:114] Args: [docker exec --privileged calico-177285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:07:24.620357  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:24.639747  314972 machine.go:94] provisionDockerMachine start ...
	I1210 23:07:24.639863  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:24.657849  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:24.658208  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:24.658232  314972 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:07:24.793637  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-177285
	
	I1210 23:07:24.793698  314972 ubuntu.go:182] provisioning hostname "calico-177285"
	I1210 23:07:24.793772  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:24.812270  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:24.812504  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:24.812519  314972 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-177285 && echo "calico-177285" | sudo tee /etc/hostname
	I1210 23:07:24.957099  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-177285
	
	I1210 23:07:24.957195  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:24.977178  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:24.977455  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:24.977484  314972 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-177285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-177285/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-177285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:07:25.112347  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:07:25.112374  314972 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5100/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5100/.minikube}
	I1210 23:07:25.112428  314972 ubuntu.go:190] setting up certificates
	I1210 23:07:25.112445  314972 provision.go:84] configureAuth start
	I1210 23:07:25.112497  314972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-177285
	I1210 23:07:25.130465  314972 provision.go:143] copyHostCerts
	I1210 23:07:25.130539  314972 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem, removing ...
	I1210 23:07:25.130553  314972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem
	I1210 23:07:25.130632  314972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/ca.pem (1078 bytes)
	I1210 23:07:25.130779  314972 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem, removing ...
	I1210 23:07:25.130794  314972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem
	I1210 23:07:25.130836  314972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/cert.pem (1123 bytes)
	I1210 23:07:25.130983  314972 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem, removing ...
	I1210 23:07:25.130999  314972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem
	I1210 23:07:25.131036  314972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5100/.minikube/key.pem (1679 bytes)
	I1210 23:07:25.131111  314972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem org=jenkins.calico-177285 san=[127.0.0.1 192.168.94.2 calico-177285 localhost minikube]
	I1210 23:07:25.226095  314972 provision.go:177] copyRemoteCerts
	I1210 23:07:25.226156  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:07:25.226197  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.243879  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
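This inspect-then-ssh pattern recurs throughout the log: port 22 of the kic container is published to an ephemeral host port (33119 here), which is read back from docker and then used for SSH as the docker user. A standalone equivalent, assuming the key path shown in the sshutil line above:

	port=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' calico-177285)
	ssh -i /home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa \
	  -p "$port" docker@127.0.0.1
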
	I1210 23:07:25.340009  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:07:25.359908  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 23:07:25.378132  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:07:25.397146  314972 provision.go:87] duration metric: took 284.687738ms to configureAuth
	I1210 23:07:25.397174  314972 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:07:25.397363  314972 config.go:182] Loaded profile config "calico-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:25.397476  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.416165  314972 main.go:143] libmachine: Using SSH client type: native
	I1210 23:07:25.416399  314972 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1210 23:07:25.416423  314972 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:07:25.695420  314972 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:07:25.695447  314972 machine.go:97] duration metric: took 1.055678209s to provisionDockerMachine
	I1210 23:07:25.695461  314972 client.go:176] duration metric: took 6.572668717s to LocalClient.Create
	I1210 23:07:25.695483  314972 start.go:167] duration metric: took 6.572724957s to libmachine.API.Create "calico-177285"
	I1210 23:07:25.695496  314972 start.go:293] postStartSetup for "calico-177285" (driver="docker")
	I1210 23:07:25.695509  314972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:07:25.695578  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:07:25.695626  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.717920  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:25.818661  314972 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:07:25.822152  314972 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:07:25.822176  314972 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:07:25.822188  314972 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/addons for local assets ...
	I1210 23:07:25.822234  314972 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5100/.minikube/files for local assets ...
	I1210 23:07:25.822306  314972 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem -> 86602.pem in /etc/ssl/certs
	I1210 23:07:25.822407  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:07:25.829937  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:07:25.850375  314972 start.go:296] duration metric: took 154.867293ms for postStartSetup
	I1210 23:07:25.850756  314972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-177285
	I1210 23:07:25.868605  314972 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/config.json ...
	I1210 23:07:25.868909  314972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:07:25.868960  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:25.888202  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:25.984533  314972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:07:25.989221  314972 start.go:128] duration metric: took 6.868779855s to createHost
	I1210 23:07:25.989246  314972 start.go:83] releasing machines lock for "calico-177285", held for 6.868924467s
	I1210 23:07:25.989316  314972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-177285
	I1210 23:07:26.008356  314972 ssh_runner.go:195] Run: cat /version.json
	I1210 23:07:26.008414  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:26.008439  314972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:07:26.008528  314972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-177285
	I1210 23:07:26.028505  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:26.029363  314972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/calico-177285/id_rsa Username:docker}
	I1210 23:07:26.176586  314972 ssh_runner.go:195] Run: systemctl --version
	I1210 23:07:26.183271  314972 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:07:26.220365  314972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:07:26.225510  314972 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:07:26.225578  314972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:07:26.252731  314972 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:07:26.252752  314972 start.go:496] detecting cgroup driver to use...
	I1210 23:07:26.252781  314972 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 23:07:26.252819  314972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:07:26.270171  314972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:07:26.282627  314972 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:07:26.282711  314972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:07:26.299694  314972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:07:26.317579  314972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:07:26.402291  314972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:07:26.489609  314972 docker.go:234] disabling docker service ...
	I1210 23:07:26.489700  314972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:07:26.508606  314972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:07:26.521601  314972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:07:26.611626  314972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:07:26.693181  314972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:07:26.705493  314972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:07:26.719599  314972 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:07:26.719680  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.729960  314972 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 23:07:26.730018  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.738804  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.747571  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.756600  314972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:07:26.764695  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.773351  314972 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:07:26.786959  314972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
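Taken together, the sed edits against /etc/crio/crio.conf.d/02-crio.conf above amount to roughly the following settings (reconstructed from the commands; the file itself, including its section headers, is not printed in the log):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
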
	I1210 23:07:26.795783  314972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:07:26.803376  314972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:07:26.810984  314972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:26.889469  314972 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:07:27.017546  314972 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:07:27.017606  314972 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:07:27.021633  314972 start.go:564] Will wait 60s for crictl version
	I1210 23:07:27.021709  314972 ssh_runner.go:195] Run: which crictl
	I1210 23:07:27.025369  314972 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:07:27.049052  314972 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:07:27.049125  314972 ssh_runner.go:195] Run: crio --version
	I1210 23:07:27.077144  314972 ssh_runner.go:195] Run: crio --version
	I1210 23:07:27.109180  314972 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:07:27.110340  314972 cli_runner.go:164] Run: docker network inspect calico-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:07:27.129692  314972 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 23:07:27.133804  314972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:07:27.143771  314972 kubeadm.go:884] updating cluster {Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:07:27.143896  314972 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:27.143952  314972 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:07:27.175927  314972 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:07:27.175949  314972 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:07:27.176007  314972 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:07:27.200987  314972 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:07:27.201011  314972 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:07:27.201019  314972 kubeadm.go:935] updating node { 192.168.94.2  8443 v1.34.2 crio true true} ...
	I1210 23:07:27.201107  314972 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-177285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1210 23:07:27.201170  314972 ssh_runner.go:195] Run: crio config
	I1210 23:07:27.247062  314972 cni.go:84] Creating CNI manager for "calico"
	I1210 23:07:27.247095  314972 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:07:27.247117  314972 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-177285 NodeName:calico-177285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:07:27.247230  314972 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-177285"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:07:27.247291  314972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:07:27.255292  314972 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:07:27.255355  314972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:07:27.263302  314972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 23:07:27.276009  314972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:07:27.291693  314972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 23:07:27.303980  314972 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:07:27.307639  314972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
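Both /etc/hosts updates in this start (host.minikube.internal -> 192.168.94.1 earlier, control-plane.minikube.internal -> 192.168.94.2 here) use the same strip-then-append pattern; a minimal standalone sketch of it, with add_host as a hypothetical helper name:

	add_host() {   # idempotently map hostname $2 to IP $1 in /etc/hosts
	  { grep -v "[[:space:]]$2\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	add_host 192.168.94.1 host.minikube.internal
	add_host 192.168.94.2 control-plane.minikube.internal
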
	I1210 23:07:27.317515  314972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:27.398511  314972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:07:27.422938  314972 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285 for IP: 192.168.94.2
	I1210 23:07:27.422961  314972 certs.go:195] generating shared ca certs ...
	I1210 23:07:27.422980  314972 certs.go:227] acquiring lock for ca certs: {Name:mkaaa741c45fb3c539c26cacc48a1e4244203555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.423155  314972 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key
	I1210 23:07:27.423211  314972 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key
	I1210 23:07:27.423227  314972 certs.go:257] generating profile certs ...
	I1210 23:07:27.423301  314972 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.key
	I1210 23:07:27.423318  314972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.crt with IP's: []
	I1210 23:07:27.507838  314972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.crt ...
	I1210 23:07:27.507865  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.crt: {Name:mk54aaa2857e73ea1f27c44ac0ee422854265672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.508045  314972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.key ...
	I1210 23:07:27.508056  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/client.key: {Name:mkf188842d56ae0fbb344d028c40f8bf650ed9a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.508145  314972 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4
	I1210 23:07:27.508160  314972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1210 23:07:27.593780  314972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4 ...
	I1210 23:07:27.593806  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4: {Name:mk7471040ab55b0a380d85ed8204b36de60e471a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.593965  314972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4 ...
	I1210 23:07:27.593978  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4: {Name:mkfd1461d452c5b3a5b938ad0a32ca6e345651a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.594057  314972 certs.go:382] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt.f4a33ca4 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt
	I1210 23:07:27.594133  314972 certs.go:386] copying /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key.f4a33ca4 -> /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key
	I1210 23:07:27.594193  314972 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key
	I1210 23:07:27.594212  314972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt with IP's: []
	I1210 23:07:27.762794  314972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt ...
	I1210 23:07:27.762821  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt: {Name:mk79496a48285a75568f813fed65772cc8edcfa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.762989  314972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key ...
	I1210 23:07:27.763000  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key: {Name:mk8606d0ba3e44ad80c3fb12fa05a6474606f962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:27.763180  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem (1338 bytes)
	W1210 23:07:27.763222  314972 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660_empty.pem, impossibly tiny 0 bytes
	I1210 23:07:27.763232  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 23:07:27.763259  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:07:27.763285  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:07:27.763308  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/certs/key.pem (1679 bytes)
	I1210 23:07:27.763356  314972 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem (1708 bytes)
	I1210 23:07:27.763966  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:07:27.782817  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 23:07:27.800488  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:07:27.818773  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:07:27.836456  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 23:07:27.854019  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 23:07:27.871804  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:07:27.889729  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/calico-177285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 23:07:27.908382  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/certs/8660.pem --> /usr/share/ca-certificates/8660.pem (1338 bytes)
	I1210 23:07:27.928441  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/ssl/certs/86602.pem --> /usr/share/ca-certificates/86602.pem (1708 bytes)
	I1210 23:07:27.947361  314972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:07:27.968371  314972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:07:27.981847  314972 ssh_runner.go:195] Run: openssl version
	I1210 23:07:27.988326  314972 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8660.pem
	I1210 23:07:27.996996  314972 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8660.pem /etc/ssl/certs/8660.pem
	I1210 23:07:28.005612  314972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8660.pem
	I1210 23:07:28.009970  314972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:34 /usr/share/ca-certificates/8660.pem
	I1210 23:07:28.010031  314972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8660.pem
	I1210 23:07:28.047914  314972 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:07:28.056065  314972 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8660.pem /etc/ssl/certs/51391683.0
	I1210 23:07:28.064673  314972 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.073591  314972 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/86602.pem /etc/ssl/certs/86602.pem
	I1210 23:07:28.081390  314972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.085164  314972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:34 /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.085222  314972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86602.pem
	I1210 23:07:28.125016  314972 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:07:28.133427  314972 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/86602.pem /etc/ssl/certs/3ec20f2e.0
	I1210 23:07:28.141124  314972 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.149181  314972 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:07:28.157136  314972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.161135  314972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.161186  314972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:07:28.204410  314972 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:07:28.213763  314972 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
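The openssl/ln pairs above implement the standard OpenSSL subject-hash lookup scheme: each CA placed under /etc/ssl/certs needs a <subject_hash>.0 symlink so that verification by hash can find it. The general pattern, using the minikubeCA values from this run:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")      # prints b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
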
	I1210 23:07:28.221741  314972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:07:28.225662  314972 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:07:28.225730  314972 kubeadm.go:401] StartCluster: {Name:calico-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-177285 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:07:28.225821  314972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:07:28.225874  314972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:07:28.256915  314972 cri.go:89] found id: ""
	I1210 23:07:28.256984  314972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:07:28.265459  314972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:07:28.273860  314972 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:07:28.273915  314972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:07:28.282956  314972 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:07:28.282976  314972 kubeadm.go:158] found existing configuration files:
	
	I1210 23:07:28.283022  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:07:28.292033  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:07:28.292090  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:07:28.300223  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:07:28.308995  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:07:28.309052  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:07:28.316744  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:07:28.324607  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:07:28.324695  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:07:28.333238  314972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:07:28.341573  314972 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:07:28.341631  314972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
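	[editor's sketch] The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init regenerates it. A minimal standalone Go sketch of that check, using only paths and the endpoint string taken from the log (not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeStaleKubeconfigs drops any kubeconfig that is missing or does not
    // reference the expected control-plane endpoint, mirroring the grep/rm
    // sequence in the log above.
    func removeStaleKubeconfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(p) // kubeadm init will recreate it
    			fmt.Printf("removed stale config %s\n", p)
    		}
    	}
    }

    func main() {
    	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }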
	I1210 23:07:28.350531  314972 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:07:28.420237  314972 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1210 23:07:28.484893  314972 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1210 23:07:26.446732  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	W1210 23:07:28.446778  307880 node_ready.go:57] node "kindnet-177285" has "Ready":"False" status (will retry)
	I1210 23:07:30.446549  307880 node_ready.go:49] node "kindnet-177285" is "Ready"
	I1210 23:07:30.446582  307880 node_ready.go:38] duration metric: took 10.503997819s for node "kindnet-177285" to be "Ready" ...
	I1210 23:07:30.446599  307880 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:07:30.446713  307880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:07:30.460243  307880 api_server.go:72] duration metric: took 10.844220312s to wait for apiserver process to appear ...
	I1210 23:07:30.460268  307880 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:07:30.460290  307880 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 23:07:30.466069  307880 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 23:07:30.467267  307880 api_server.go:141] control plane version: v1.34.2
	I1210 23:07:30.467296  307880 api_server.go:131] duration metric: took 7.020594ms to wait for apiserver health ...
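	[editor's sketch] The healthz wait above is a plain HTTPS poll of the apiserver's /healthz endpoint until it returns 200. An illustrative Go probe under the same assumptions (the address comes from the log; TLS verification is skipped only because this is a test-side check, a real client would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers 200
    // or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute))
    }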
	I1210 23:07:30.467307  307880 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:07:30.470825  307880 system_pods.go:59] 8 kube-system pods found
	I1210 23:07:30.470856  307880 system_pods.go:61] "coredns-66bc5c9577-m9xlh" [49e9af5f-74e9-4901-b6df-e4b6ba053571] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:07:30.470864  307880 system_pods.go:61] "etcd-kindnet-177285" [12e36c84-3f5f-43f8-9dc7-e06f7760618b] Running
	I1210 23:07:30.470872  307880 system_pods.go:61] "kindnet-kb67v" [9b500ae7-b836-408b-a181-9b7813d2720e] Running
	I1210 23:07:30.470877  307880 system_pods.go:61] "kube-apiserver-kindnet-177285" [65a39fbf-15f7-4a9b-9914-905a2fb4ac03] Running
	I1210 23:07:30.470882  307880 system_pods.go:61] "kube-controller-manager-kindnet-177285" [f6db618b-6bfa-46ee-92ff-899a797f3287] Running
	I1210 23:07:30.470888  307880 system_pods.go:61] "kube-proxy-gbt27" [d0b25815-91a0-4944-9211-309ce89a3808] Running
	I1210 23:07:30.470893  307880 system_pods.go:61] "kube-scheduler-kindnet-177285" [81c3d7a3-fa16-4e89-8310-68d8a66e439c] Running
	I1210 23:07:30.470900  307880 system_pods.go:61] "storage-provisioner" [d6696bf8-2a91-419d-8652-c9439a510589] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:07:30.470916  307880 system_pods.go:74] duration metric: took 3.601632ms to wait for pod list to return data ...
	I1210 23:07:30.470929  307880 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:07:30.473360  307880 default_sa.go:45] found service account: "default"
	I1210 23:07:30.473378  307880 default_sa.go:55] duration metric: took 2.443409ms for default service account to be created ...
	I1210 23:07:30.473386  307880 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:07:30.476457  307880 system_pods.go:86] 8 kube-system pods found
	I1210 23:07:30.476490  307880 system_pods.go:89] "coredns-66bc5c9577-m9xlh" [49e9af5f-74e9-4901-b6df-e4b6ba053571] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:07:30.476508  307880 system_pods.go:89] "etcd-kindnet-177285" [12e36c84-3f5f-43f8-9dc7-e06f7760618b] Running
	I1210 23:07:30.476521  307880 system_pods.go:89] "kindnet-kb67v" [9b500ae7-b836-408b-a181-9b7813d2720e] Running
	I1210 23:07:30.476527  307880 system_pods.go:89] "kube-apiserver-kindnet-177285" [65a39fbf-15f7-4a9b-9914-905a2fb4ac03] Running
	I1210 23:07:30.476535  307880 system_pods.go:89] "kube-controller-manager-kindnet-177285" [f6db618b-6bfa-46ee-92ff-899a797f3287] Running
	I1210 23:07:30.476541  307880 system_pods.go:89] "kube-proxy-gbt27" [d0b25815-91a0-4944-9211-309ce89a3808] Running
	I1210 23:07:30.476548  307880 system_pods.go:89] "kube-scheduler-kindnet-177285" [81c3d7a3-fa16-4e89-8310-68d8a66e439c] Running
	I1210 23:07:30.476555  307880 system_pods.go:89] "storage-provisioner" [d6696bf8-2a91-419d-8652-c9439a510589] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:07:30.476583  307880 retry.go:31] will retry after 294.319708ms: missing components: kube-dns
	I1210 23:07:30.777827  307880 system_pods.go:86] 8 kube-system pods found
	I1210 23:07:30.777865  307880 system_pods.go:89] "coredns-66bc5c9577-m9xlh" [49e9af5f-74e9-4901-b6df-e4b6ba053571] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:07:30.777899  307880 system_pods.go:89] "etcd-kindnet-177285" [12e36c84-3f5f-43f8-9dc7-e06f7760618b] Running
	I1210 23:07:30.777908  307880 system_pods.go:89] "kindnet-kb67v" [9b500ae7-b836-408b-a181-9b7813d2720e] Running
	I1210 23:07:30.777914  307880 system_pods.go:89] "kube-apiserver-kindnet-177285" [65a39fbf-15f7-4a9b-9914-905a2fb4ac03] Running
	I1210 23:07:30.777921  307880 system_pods.go:89] "kube-controller-manager-kindnet-177285" [f6db618b-6bfa-46ee-92ff-899a797f3287] Running
	I1210 23:07:30.777927  307880 system_pods.go:89] "kube-proxy-gbt27" [d0b25815-91a0-4944-9211-309ce89a3808] Running
	I1210 23:07:30.777933  307880 system_pods.go:89] "kube-scheduler-kindnet-177285" [81c3d7a3-fa16-4e89-8310-68d8a66e439c] Running
	I1210 23:07:30.777941  307880 system_pods.go:89] "storage-provisioner" [d6696bf8-2a91-419d-8652-c9439a510589] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:07:30.777980  307880 retry.go:31] will retry after 374.308071ms: missing components: kube-dns
	I1210 23:07:31.157562  307880 system_pods.go:86] 8 kube-system pods found
	I1210 23:07:31.157593  307880 system_pods.go:89] "coredns-66bc5c9577-m9xlh" [49e9af5f-74e9-4901-b6df-e4b6ba053571] Running
	I1210 23:07:31.157601  307880 system_pods.go:89] "etcd-kindnet-177285" [12e36c84-3f5f-43f8-9dc7-e06f7760618b] Running
	I1210 23:07:31.157607  307880 system_pods.go:89] "kindnet-kb67v" [9b500ae7-b836-408b-a181-9b7813d2720e] Running
	I1210 23:07:31.157611  307880 system_pods.go:89] "kube-apiserver-kindnet-177285" [65a39fbf-15f7-4a9b-9914-905a2fb4ac03] Running
	I1210 23:07:31.157616  307880 system_pods.go:89] "kube-controller-manager-kindnet-177285" [f6db618b-6bfa-46ee-92ff-899a797f3287] Running
	I1210 23:07:31.157627  307880 system_pods.go:89] "kube-proxy-gbt27" [d0b25815-91a0-4944-9211-309ce89a3808] Running
	I1210 23:07:31.157632  307880 system_pods.go:89] "kube-scheduler-kindnet-177285" [81c3d7a3-fa16-4e89-8310-68d8a66e439c] Running
	I1210 23:07:31.157664  307880 system_pods.go:89] "storage-provisioner" [d6696bf8-2a91-419d-8652-c9439a510589] Running
	I1210 23:07:31.157678  307880 system_pods.go:126] duration metric: took 684.285044ms to wait for k8s-apps to be running ...
	I1210 23:07:31.157691  307880 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:07:31.157753  307880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:31.174114  307880 system_svc.go:56] duration metric: took 16.413012ms WaitForService to wait for kubelet
	I1210 23:07:31.174145  307880 kubeadm.go:587] duration metric: took 11.558128991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:07:31.174167  307880 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:07:31.177431  307880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 23:07:31.177462  307880 node_conditions.go:123] node cpu capacity is 8
	I1210 23:07:31.177481  307880 node_conditions.go:105] duration metric: took 3.308088ms to run NodePressure ...
	I1210 23:07:31.177496  307880 start.go:242] waiting for startup goroutines ...
	I1210 23:07:31.177506  307880 start.go:247] waiting for cluster config update ...
	I1210 23:07:31.177520  307880 start.go:256] writing updated cluster config ...
	I1210 23:07:31.177854  307880 ssh_runner.go:195] Run: rm -f paused
	I1210 23:07:31.182067  307880 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:07:31.186502  307880 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m9xlh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.191277  307880 pod_ready.go:94] pod "coredns-66bc5c9577-m9xlh" is "Ready"
	I1210 23:07:31.191303  307880 pod_ready.go:86] duration metric: took 4.779278ms for pod "coredns-66bc5c9577-m9xlh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.194108  307880 pod_ready.go:83] waiting for pod "etcd-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.199399  307880 pod_ready.go:94] pod "etcd-kindnet-177285" is "Ready"
	I1210 23:07:31.199423  307880 pod_ready.go:86] duration metric: took 5.291583ms for pod "etcd-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.201711  307880 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.207694  307880 pod_ready.go:94] pod "kube-apiserver-kindnet-177285" is "Ready"
	I1210 23:07:31.207773  307880 pod_ready.go:86] duration metric: took 6.03963ms for pod "kube-apiserver-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.212726  307880 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.587779  307880 pod_ready.go:94] pod "kube-controller-manager-kindnet-177285" is "Ready"
	I1210 23:07:31.587805  307880 pod_ready.go:86] duration metric: took 375.052604ms for pod "kube-controller-manager-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:31.786815  307880 pod_ready.go:83] waiting for pod "kube-proxy-gbt27" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:32.187422  307880 pod_ready.go:94] pod "kube-proxy-gbt27" is "Ready"
	I1210 23:07:32.187450  307880 pod_ready.go:86] duration metric: took 400.605807ms for pod "kube-proxy-gbt27" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:32.389040  307880 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:32.787883  307880 pod_ready.go:94] pod "kube-scheduler-kindnet-177285" is "Ready"
	I1210 23:07:32.787908  307880 pod_ready.go:86] duration metric: took 398.835652ms for pod "kube-scheduler-kindnet-177285" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:07:32.787922  307880 pod_ready.go:40] duration metric: took 1.605826559s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:07:32.839436  307880 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:07:32.846059  307880 out.go:179] * Done! kubectl is now configured to use "kindnet-177285" cluster and "default" namespace by default
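	[editor's sketch] The extra 4m0s wait above checks that every kube-system pod carrying one of the listed labels (k8s-app=kube-dns, component=etcd, and so on) reports a Ready condition. A compact client-go sketch of that kind of readiness check, assuming a kubeconfig at the default location (illustrative only, not minikube's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether every kube-system pod matching the label
    // selector has a Ready=True condition.
    func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	ok, err := podsReady(kubernetes.NewForConfigOrDie(cfg), "k8s-app=kube-dns")
    	fmt.Println(ok, err)
    }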
	
	
	==> CRI-O <==
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753505433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753660385Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/00e7c71d69a49d9b98c62b7a90d00ebc6efa1f41178aec0080b472ec20f4f410/merged/etc/passwd: no such file or directory"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753688158Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/00e7c71d69a49d9b98c62b7a90d00ebc6efa1f41178aec0080b472ec20f4f410/merged/etc/group: no such file or directory"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.753972677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.781758955Z" level=info msg="Created container 45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140: kube-system/storage-provisioner/storage-provisioner" id=fb380cca-7177-4b0d-8e50-ad26c4bee50d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.782392156Z" level=info msg="Starting container: 45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140" id=d480947f-4783-49ec-abbc-837379251c00 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:07:06 embed-certs-468067 crio[558]: time="2025-12-10T23:07:06.784672605Z" level=info msg="Started container" PID=1711 containerID=45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140 description=kube-system/storage-provisioner/storage-provisioner id=d480947f-4783-49ec-abbc-837379251c00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=04175651feb158c698907f0f23ae069739ee6f65659b3e8639f896973fe2cfaf
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.437029075Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.441209703Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.441241429Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.441268407Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.444895628Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.444925624Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.444943813Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.448491574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.44851379Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.448535719Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.452051091Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.452074675Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.452095009Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.455454689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.455477601Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.455497549Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.459111538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 23:07:16 embed-certs-468067 crio[558]: time="2025-12-10T23:07:16.459137569Z" level=info msg="Updated default CNI network name to kindnet"
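	[editor's sketch] The CRI-O lines above show its CNI monitor reacting to the CREATE/WRITE/RENAME events kindnet produces while atomically replacing 10-kindnet.conflist, then re-reading the default network name. A small standard-library sketch of how the default network name can be recovered from the conflist files (illustrative, not CRI-O's code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"path/filepath"
    	"sort"
    )

    // defaultCNINetwork returns the "name" field of the lexically first
    // *.conflist under dir, which is how the default CNI network is typically
    // selected.
    func defaultCNINetwork(dir string) (string, error) {
    	files, err := filepath.Glob(filepath.Join(dir, "*.conflist"))
    	if err != nil || len(files) == 0 {
    		return "", fmt.Errorf("no conflist found in %s: %v", dir, err)
    	}
    	sort.Strings(files)
    	data, err := os.ReadFile(files[0])
    	if err != nil {
    		return "", err
    	}
    	var conf struct {
    		Name string `json:"name"`
    	}
    	if err := json.Unmarshal(data, &conf); err != nil {
    		return "", err
    	}
    	return conf.Name, nil
    }

    func main() {
    	fmt.Println(defaultCNINetwork("/etc/cni/net.d"))
    }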
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	45fef78fec697       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   04175651feb15       storage-provisioner                          kube-system
	ca3188ebca191       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   a3d430ea85943       dashboard-metrics-scraper-6ffb444bf9-tqmd5   kubernetes-dashboard
	c437e7c17bd73       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   7931270c7f412       kubernetes-dashboard-855c9754f9-4l5m7        kubernetes-dashboard
	3511a11b6bb3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   c658845b30623       coredns-66bc5c9577-qw48c                     kube-system
	0d182ea0c7d43       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   d71c0ae2f17bd       busybox                                      default
	9565be37ba4bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   04175651feb15       storage-provisioner                          kube-system
	db93da31acfff       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           57 seconds ago       Running             kube-proxy                  0                   9b8200f5a08d2       kube-proxy-27pft                             kube-system
	a043df7068ef6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   f16fe216e294c       kindnet-dkdlj                                kube-system
	7106cbbed2e17       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   00f20f9bfd3a5       kube-apiserver-embed-certs-468067            kube-system
	7a770e31c3cb5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   1112cd9bb8721       kube-controller-manager-embed-certs-468067   kube-system
	01e91a1d6729c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   ef9dac8446d39       etcd-embed-certs-468067                      kube-system
	4e26510798550       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   42fbba0572928       kube-scheduler-embed-certs-468067            kube-system
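	[editor's sketch] The container table above is the crictl view of the node; earlier in the log the same data is queried with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A minimal Go wrapper around that exact command, as it appears in the log (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainerIDs shells out to crictl the same way the cri.go step
    // in the log does and returns the IDs of all kube-system containers.
    func kubeSystemContainerIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := kubeSystemContainerIDs()
    	fmt.Println(len(ids), "containers", err)
    }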
	
	
	==> coredns [3511a11b6bb3ef6f21c769d491ba25968bb0aaeb52b92310391a70c59c50bcce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49304 - 47449 "HINFO IN 991423004734752759.2042785437316989899. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.028641651s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
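	[editor's sketch] The "Still waiting on: kubernetes" lines come from CoreDNS's ready plugin, which serves an HTTP readiness endpoint (port 8181 by default) that only returns 200 once every enabled plugin, including the kubernetes plugin shown timing out against 10.96.0.1:443 above, has completed its initial sync. A minimal probe sketch (the pod IP below is a placeholder, not taken from this report):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeCoreDNSReady hits the ready plugin's endpoint; it answers 200 only
    // after all CoreDNS plugins have finished their initial setup.
    func probeCoreDNSReady(podIP string) error {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("coredns not ready: %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(probeCoreDNSReady("10.244.0.2")) // placeholder pod IP
    }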
	
	
	==> describe nodes <==
	Name:               embed-certs-468067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-468067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=embed-certs-468067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_05_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:05:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-468067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:07:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:07:05 +0000   Wed, 10 Dec 2025 23:05:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-468067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                d2cd28f2-4471-41b6-a37d-4eadfd61fbb3
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-qw48c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-468067                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-dkdlj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-468067             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-468067    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-27pft                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-468067             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tqmd5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4l5m7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node embed-certs-468067 event: Registered Node embed-certs-468067 in Controller
	  Normal  NodeReady                98s                  kubelet          Node embed-certs-468067 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node embed-certs-468067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node embed-certs-468067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node embed-certs-468067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node embed-certs-468067 event: Registered Node embed-certs-468067 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [01e91a1d6729c0f408be75ad6d31df3a99ec66513c7a064523330f0bdbf2b192] <==
	{"level":"warn","ts":"2025-12-10T23:06:34.094598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.103908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.114699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.124127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.133400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.141879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.150673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.169902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.187056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.200797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.213234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.225395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.235168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.243245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.251741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.261292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.269932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.279545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.292361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.300866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.309977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.331881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.348010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.348368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:34.414551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:07:33 up 49 min,  0 user,  load average: 7.49, 4.39, 2.49
	Linux embed-certs-468067 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a043df7068ef659113d325d365985d88644c985a3de76a00be5ef60feb663dc8] <==
	I1210 23:06:36.235019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:36.235282       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 23:06:36.235429       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:36.235449       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:36.235476       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:36.436199       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:36.436363       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:36.436441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:36.436949       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 23:07:06.437390       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 23:07:06.437408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 23:07:06.437394       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 23:07:06.437395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1210 23:07:07.936731       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:07:07.936759       1 metrics.go:72] Registering metrics
	I1210 23:07:07.936823       1 controller.go:711] "Syncing nftables rules"
	I1210 23:07:16.436710       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 23:07:16.436768       1 main.go:301] handling current node
	I1210 23:07:26.444874       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 23:07:26.444915       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7106cbbed2e1740155de640dba2e41c219c20558eca67ddb29ccb4cf9dee15e8] <==
	I1210 23:06:35.030354       1 aggregator.go:171] initial CRD sync complete...
	I1210 23:06:35.030399       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:06:35.030424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:06:35.030478       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:06:35.032171       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:06:35.026858       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:06:35.045480       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:06:35.053265       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 23:06:35.053310       1 policy_source.go:240] refreshing policies
	I1210 23:06:35.070225       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:35.094476       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 23:06:35.094527       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 23:06:35.325818       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:06:35.352922       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:35.370874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:35.379684       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:35.389858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:35.428679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.120.149"}
	I1210 23:06:35.441229       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.42.102"}
	I1210 23:06:35.896770       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:06:38.548918       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:38.796835       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:38.945000       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7a770e31c3cb5dd673f9eb4d8362019b70ef3b1f55e73857b7aa5eb2dc9edd45] <==
	I1210 23:06:38.478989       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 23:06:38.480180       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 23:06:38.482400       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 23:06:38.483697       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 23:06:38.485959       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 23:06:38.487477       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:06:38.489716       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:06:38.492131       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 23:06:38.492298       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 23:06:38.492484       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 23:06:38.492462       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 23:06:38.492522       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 23:06:38.492575       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 23:06:38.492686       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-468067"
	I1210 23:06:38.492850       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 23:06:38.492885       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 23:06:38.492999       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:06:38.493112       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 23:06:38.493430       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 23:06:38.494003       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 23:06:38.495157       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:06:38.499680       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:38.509861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:06:38.512998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:38.516139       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [db93da31acfffbb2a5392569333b7c3d46b434fbda9f06f848008784060f68a0] <==
	I1210 23:06:36.072210       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:36.139564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:06:36.240137       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:06:36.240170       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 23:06:36.240264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:36.261042       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:36.261114       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:06:36.266260       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:36.266783       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:06:36.266816       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:36.268373       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:36.268450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:36.268450       1 config.go:200] "Starting service config controller"
	I1210 23:06:36.268471       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:36.268621       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:36.268842       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:36.268716       1 config.go:309] "Starting node config controller"
	I1210 23:06:36.268861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:36.268869       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:36.368577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:36.369733       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 23:06:36.369840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4e26510798550249d8f464c1a3f181c49a0bfaeef43add54ea3a9c1c1a9c090b] <==
	I1210 23:06:33.761138       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:06:34.906908       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:06:34.906949       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:06:34.906960       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:06:34.906969       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:06:34.988379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 23:06:34.989458       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:34.993481       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:34.993523       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:34.994070       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:06:34.994150       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 23:06:35.008640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found, role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 23:06:35.018213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 23:06:35.018287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 23:06:35.018578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:06:35.018600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:06:35.019397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 23:06:35.019724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 23:06:35.020051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:06:35.020322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1210 23:06:36.493707       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:06:38 embed-certs-468067 kubelet[723]: I1210 23:06:38.994166     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj75p\" (UniqueName: \"kubernetes.io/projected/cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03-kube-api-access-fj75p\") pod \"dashboard-metrics-scraper-6ffb444bf9-tqmd5\" (UID: \"cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5"
	Dec 10 23:06:38 embed-certs-468067 kubelet[723]: I1210 23:06:38.994193     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c28g\" (UniqueName: \"kubernetes.io/projected/ceb10413-18a8-45d1-9707-8a032353a846-kube-api-access-4c28g\") pod \"kubernetes-dashboard-855c9754f9-4l5m7\" (UID: \"ceb10413-18a8-45d1-9707-8a032353a846\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l5m7"
	Dec 10 23:06:44 embed-certs-468067 kubelet[723]: I1210 23:06:44.751846     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 23:06:44 embed-certs-468067 kubelet[723]: I1210 23:06:44.973494     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4l5m7" podStartSLOduration=2.680518683 podStartE2EDuration="6.973473458s" podCreationTimestamp="2025-12-10 23:06:38 +0000 UTC" firstStartedPulling="2025-12-10 23:06:39.248354102 +0000 UTC m=+6.750442308" lastFinishedPulling="2025-12-10 23:06:43.541308867 +0000 UTC m=+11.043397083" observedRunningTime="2025-12-10 23:06:43.683409169 +0000 UTC m=+11.185497383" watchObservedRunningTime="2025-12-10 23:06:44.973473458 +0000 UTC m=+12.475561671"
	Dec 10 23:06:46 embed-certs-468067 kubelet[723]: I1210 23:06:46.675404     723 scope.go:117] "RemoveContainer" containerID="2a11f7edecef74848b8399b55dd957e8c8a08627bf00f019d0cd523dfa713785"
	Dec 10 23:06:47 embed-certs-468067 kubelet[723]: I1210 23:06:47.688764     723 scope.go:117] "RemoveContainer" containerID="2a11f7edecef74848b8399b55dd957e8c8a08627bf00f019d0cd523dfa713785"
	Dec 10 23:06:47 embed-certs-468067 kubelet[723]: I1210 23:06:47.689707     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:06:47 embed-certs-468067 kubelet[723]: E1210 23:06:47.689882     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:06:48 embed-certs-468067 kubelet[723]: I1210 23:06:48.692592     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:06:48 embed-certs-468067 kubelet[723]: E1210 23:06:48.692851     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:06:49 embed-certs-468067 kubelet[723]: I1210 23:06:49.695002     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:06:49 embed-certs-468067 kubelet[723]: E1210 23:06:49.695212     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: I1210 23:07:00.602796     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: I1210 23:07:00.726289     723 scope.go:117] "RemoveContainer" containerID="eb8a08a0eccd728c5b2652498a0da50e7db62078ceb06f22d49fe8e6e5b9377f"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: I1210 23:07:00.726488     723 scope.go:117] "RemoveContainer" containerID="ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	Dec 10 23:07:00 embed-certs-468067 kubelet[723]: E1210 23:07:00.726705     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:06 embed-certs-468067 kubelet[723]: I1210 23:07:06.746182     723 scope.go:117] "RemoveContainer" containerID="9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75"
	Dec 10 23:07:08 embed-certs-468067 kubelet[723]: I1210 23:07:08.655312     723 scope.go:117] "RemoveContainer" containerID="ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	Dec 10 23:07:08 embed-certs-468067 kubelet[723]: E1210 23:07:08.655584     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:20 embed-certs-468067 kubelet[723]: I1210 23:07:20.602612     723 scope.go:117] "RemoveContainer" containerID="ca3188ebca19188ad25926e96e96c8ebf2ad239edf6de5a9bb7203da5c6e2816"
	Dec 10 23:07:20 embed-certs-468067 kubelet[723]: E1210 23:07:20.602858     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tqmd5_kubernetes-dashboard(cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tqmd5" podUID="cefb2cd5-a0d1-4ca6-987e-2e08d87d3c03"
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:07:28 embed-certs-468067 systemd[1]: kubelet.service: Consumed 1.804s CPU time.
	
	
	==> kubernetes-dashboard [c437e7c17bd73cb590736ec702bed4f2ba46902dcc3f5b1b262b60113ca64d0e] <==
	2025/12/10 23:06:43 Using namespace: kubernetes-dashboard
	2025/12/10 23:06:43 Using in-cluster config to connect to apiserver
	2025/12/10 23:06:43 Using secret token for csrf signing
	2025/12/10 23:06:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:06:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:06:43 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 23:06:43 Generating JWE encryption key
	2025/12/10 23:06:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:06:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:06:43 Initializing JWE encryption key from synchronized object
	2025/12/10 23:06:43 Creating in-cluster Sidecar client
	2025/12/10 23:06:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:43 Serving insecurely on HTTP port: 9090
	2025/12/10 23:07:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:43 Starting overwatch
	
	
	==> storage-provisioner [45fef78fec697ac8f280299bf413061d68d604449998dc417fb79d2a2c80b140] <==
	I1210 23:07:06.797872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:07:06.806991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:07:06.807056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:07:06.809913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:10.267091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:14.527555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:18.126072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:21.180305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.202427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.208237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:24.208418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:07:24.208541       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48704f4d-8b51-4c73-91f7-52bbe5715cf0", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-468067_9788c96a-4f4e-4856-8bee-f4f8aa06aab5 became leader
	I1210 23:07:24.208635       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-468067_9788c96a-4f4e-4856-8bee-f4f8aa06aab5!
	W1210 23:07:24.211310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.215592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:24.309047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-468067_9788c96a-4f4e-4856-8bee-f4f8aa06aab5!
	W1210 23:07:26.219067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:26.223750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:28.227283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:28.232750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:30.236367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:30.240155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:32.243900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:32.248570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9565be37ba4bcc90b330ba76bab9605ee89a82a17944a624bb12b6aa6d0f6d75] <==
	I1210 23:06:36.028152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:07:06.032109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468067 -n embed-certs-468067
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468067 -n embed-certs-468067: exit status 2 (363.362979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-468067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.39s)
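
The post-mortem above closes with the APIServer status template printing "Running" while the command itself exits with status 2. When triaging a Pause failure like this one, the same Go-template status query can be widened to the other fields to see at a glance which component the non-zero exit comes from; a sketch, assuming the embed-certs-468067 profile still exists (the Audit table later in this report shows it was deleted shortly afterwards):

	$ out/minikube-linux-amd64 status -p embed-certs-468067 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'

Given that the failed pause had already stopped the kubelet (see the kubelet log above, 23:07:28), the {{.Kubelet}} field would be expected to read Stopped here.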

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-443884 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-443884 --alsologtostderr -v=1: exit status 80 (2.197170916s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-443884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:07:37.436157  319836 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:07:37.436486  319836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:37.436499  319836 out.go:374] Setting ErrFile to fd 2...
	I1210 23:07:37.436505  319836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:37.436865  319836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:07:37.437220  319836 out.go:368] Setting JSON to false
	I1210 23:07:37.437245  319836 mustload.go:66] Loading cluster: default-k8s-diff-port-443884
	I1210 23:07:37.437785  319836 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.438351  319836 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-443884 --format={{.State.Status}}
	I1210 23:07:37.461460  319836 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:07:37.461765  319836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:37.530818  319836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-10 23:07:37.51899331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:37.531616  319836 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-netw
ork:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-443884 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(boo
l=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 23:07:37.533488  319836 out.go:179] * Pausing node default-k8s-diff-port-443884 ... 
	I1210 23:07:37.535301  319836 host.go:66] Checking if "default-k8s-diff-port-443884" exists ...
	I1210 23:07:37.535637  319836 ssh_runner.go:195] Run: systemctl --version
	I1210 23:07:37.535702  319836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-443884
	I1210 23:07:37.557558  319836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/default-k8s-diff-port-443884/id_rsa Username:docker}
	I1210 23:07:37.658272  319836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:37.689522  319836 pause.go:52] kubelet running: true
	I1210 23:07:37.689582  319836 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:37.885549  319836 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:37.885632  319836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:37.973801  319836 cri.go:89] found id: "03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd"
	I1210 23:07:37.973830  319836 cri.go:89] found id: "7044fc6a78df205247cf8e9b36174a84891d5209da8269abec63b4e1e9a01dce"
	I1210 23:07:37.973836  319836 cri.go:89] found id: "c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910"
	I1210 23:07:37.973840  319836 cri.go:89] found id: "ac149f419c2611effc42416739c5200b8fc3d7699559c20bd6f60b50894ab601"
	I1210 23:07:37.973845  319836 cri.go:89] found id: "d8958d68c8e773b2cb94da3cc6d13f3cf27a5a8ecb168fac8decd50a0af55dfc"
	I1210 23:07:37.973862  319836 cri.go:89] found id: "26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1"
	I1210 23:07:37.973867  319836 cri.go:89] found id: "42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865"
	I1210 23:07:37.973871  319836 cri.go:89] found id: "ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2"
	I1210 23:07:37.973876  319836 cri.go:89] found id: "2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f"
	I1210 23:07:37.973894  319836 cri.go:89] found id: "5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	I1210 23:07:37.973898  319836 cri.go:89] found id: "57cd064b10a71dd8a8609addc81b713e938d596421399354feaee45d87ab2b89"
	I1210 23:07:37.973902  319836 cri.go:89] found id: ""
	I1210 23:07:37.973947  319836 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:37.991118  319836 retry.go:31] will retry after 362.24734ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:37Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:07:38.353578  319836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:38.367737  319836 pause.go:52] kubelet running: false
	I1210 23:07:38.367817  319836 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:38.593826  319836 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:38.593918  319836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:38.691417  319836 cri.go:89] found id: "03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd"
	I1210 23:07:38.691444  319836 cri.go:89] found id: "7044fc6a78df205247cf8e9b36174a84891d5209da8269abec63b4e1e9a01dce"
	I1210 23:07:38.691449  319836 cri.go:89] found id: "c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910"
	I1210 23:07:38.691454  319836 cri.go:89] found id: "ac149f419c2611effc42416739c5200b8fc3d7699559c20bd6f60b50894ab601"
	I1210 23:07:38.691458  319836 cri.go:89] found id: "d8958d68c8e773b2cb94da3cc6d13f3cf27a5a8ecb168fac8decd50a0af55dfc"
	I1210 23:07:38.691463  319836 cri.go:89] found id: "26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1"
	I1210 23:07:38.691467  319836 cri.go:89] found id: "42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865"
	I1210 23:07:38.691471  319836 cri.go:89] found id: "ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2"
	I1210 23:07:38.691475  319836 cri.go:89] found id: "2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f"
	I1210 23:07:38.691491  319836 cri.go:89] found id: "5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	I1210 23:07:38.691496  319836 cri.go:89] found id: "57cd064b10a71dd8a8609addc81b713e938d596421399354feaee45d87ab2b89"
	I1210 23:07:38.691500  319836 cri.go:89] found id: ""
	I1210 23:07:38.692044  319836 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:38.711254  319836 retry.go:31] will retry after 510.014943ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:38Z" level=error msg="open /run/runc: no such file or directory"
	I1210 23:07:39.221769  319836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:07:39.240430  319836 pause.go:52] kubelet running: false
	I1210 23:07:39.240504  319836 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 23:07:39.435806  319836 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 23:07:39.435892  319836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 23:07:39.526220  319836 cri.go:89] found id: "03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd"
	I1210 23:07:39.526248  319836 cri.go:89] found id: "7044fc6a78df205247cf8e9b36174a84891d5209da8269abec63b4e1e9a01dce"
	I1210 23:07:39.526255  319836 cri.go:89] found id: "c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910"
	I1210 23:07:39.526260  319836 cri.go:89] found id: "ac149f419c2611effc42416739c5200b8fc3d7699559c20bd6f60b50894ab601"
	I1210 23:07:39.526265  319836 cri.go:89] found id: "d8958d68c8e773b2cb94da3cc6d13f3cf27a5a8ecb168fac8decd50a0af55dfc"
	I1210 23:07:39.526270  319836 cri.go:89] found id: "26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1"
	I1210 23:07:39.526274  319836 cri.go:89] found id: "42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865"
	I1210 23:07:39.526279  319836 cri.go:89] found id: "ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2"
	I1210 23:07:39.526283  319836 cri.go:89] found id: "2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f"
	I1210 23:07:39.526291  319836 cri.go:89] found id: "5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	I1210 23:07:39.526300  319836 cri.go:89] found id: "57cd064b10a71dd8a8609addc81b713e938d596421399354feaee45d87ab2b89"
	I1210 23:07:39.526303  319836 cri.go:89] found id: ""
	I1210 23:07:39.526343  319836 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:07:39.543472  319836 out.go:203] 
	W1210 23:07:39.547849  319836 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:07:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:07:39.547876  319836 out.go:285] * 
	* 
	W1210 23:07:39.554463  319836 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:07:39.556063  319836 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-443884 --alsologtostderr -v=1 failed: exit status 80
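The stderr above shows where the pause path gives up: the kubelet has been disabled, crictl still reports the kube-system and kubernetes-dashboard containers, but `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so minikube aborts with GUEST_PAUSE after three attempts. The same node-side sequence can be replayed by hand with the exact commands from the log; a sketch, run through `minikube ssh` against the profile under test:

	$ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-443884 -- sudo systemctl is-active kubelet
	$ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-443884 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	$ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-443884 -- sudo runc list -f json
	$ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-443884 -- ls /run/runc

If /run/runc is indeed missing while crictl still lists running containers, the failure is in enumerating runc state under its default --root directory rather than in the containers themselves; note the "/run" tmpfs mount in the docker inspect output below.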
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-443884
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-443884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b",
	        "Created": "2025-12-10T23:05:26.959123143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:34.325138246Z",
	            "FinishedAt": "2025-12-10T23:06:33.197843504Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/hosts",
	        "LogPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b-json.log",
	        "Name": "/default-k8s-diff-port-443884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-443884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-443884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b",
	                "LowerDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-443884",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-443884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-443884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-443884",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-443884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "375b66f2a05ac3c87d3c82dd52ed59b5f004f75bb8c3dca84798cf0d3236e69f",
	            "SandboxKey": "/var/run/docker/netns/375b66f2a05a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-443884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8875699386e55b3c7ba6f71ae6cb594bed837dd60b39b87d708bd26d3360a926",
	                    "EndpointID": "041dee59b71dd88f49caedfdc95cfd31e899eb2839f139109feefb260be1a67a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "5e:89:a7:4f:20:aa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-443884",
	                        "a8275652c47b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
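The inspect output is also where the SSH details in the trace above come from: the cli_runner line at 23:07:37.535 asks Docker for the host port bound to 22/tcp, and sshutil then dials 127.0.0.1:33109, matching the "22/tcp" entry under NetworkSettings.Ports. The same template works when poking at a profile by hand (illustrative; the port is whatever Docker assigned on this particular run):

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-443884
	33109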
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884: exit status 2 (423.926046ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-443884 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-443884 logs -n 25: (3.035406326s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-177285 sudo docker system info                                                                                                                             │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cri-dockerd --version                                                                                                                          │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo containerd config dump                                                                                                                         │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo crio config                                                                                                                                    │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ delete  │ -p auto-177285                                                                                                                                                     │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ start   │ -p calico-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-177285                │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ image   │ embed-certs-468067 image list --format=json                                                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ pause   │ -p embed-certs-468067 --alsologtostderr -v=1                                                                                                                       │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ delete  │ -p embed-certs-468067                                                                                                                                              │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ image   │ default-k8s-diff-port-443884 image list --format=json                                                                                                              │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ pause   │ -p default-k8s-diff-port-443884 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ delete  │ -p embed-certs-468067                                                                                                                                              │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ start   │ -p custom-flannel-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p kindnet-177285 pgrep -a kubelet                                                                                                                                 │ kindnet-177285               │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:07:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:07:37.908502  320107 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:07:37.908912  320107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:37.908929  320107 out.go:374] Setting ErrFile to fd 2...
	I1210 23:07:37.908935  320107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:37.909205  320107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:07:37.909867  320107 out.go:368] Setting JSON to false
	I1210 23:07:37.911461  320107 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3000,"bootTime":1765405058,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:07:37.911545  320107 start.go:143] virtualization: kvm guest
	I1210 23:07:37.913871  320107 out.go:179] * [custom-flannel-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:07:37.915871  320107 notify.go:221] Checking for updates...
	I1210 23:07:37.915909  320107 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:07:37.917707  320107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:07:37.919398  320107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:37.920853  320107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:07:37.922027  320107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:07:37.923244  320107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:07:37.925236  320107 config.go:182] Loaded profile config "calico-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.925435  320107 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.925567  320107 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.925708  320107 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:07:37.956295  320107 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:07:37.956424  320107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:38.026918  320107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:38.015144609 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:38.027056  320107 docker.go:319] overlay module found
	I1210 23:07:38.028860  320107 out.go:179] * Using the docker driver based on user configuration
	I1210 23:07:38.030112  320107 start.go:309] selected driver: docker
	I1210 23:07:38.030132  320107 start.go:927] validating driver "docker" against <nil>
	I1210 23:07:38.030148  320107 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:07:38.031013  320107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:38.091993  320107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:38.082259567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:38.092187  320107 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:07:38.092439  320107 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:07:38.094255  320107 out.go:179] * Using Docker driver with root privileges
	I1210 23:07:38.095694  320107 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1210 23:07:38.095729  320107 start_flags.go:351] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 23:07:38.095838  320107 start.go:353] cluster config:
	{Name:custom-flannel-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:07:38.097464  320107 out.go:179] * Starting "custom-flannel-177285" primary control-plane node in "custom-flannel-177285" cluster
	I1210 23:07:38.098774  320107 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:07:38.100050  320107 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:07:38.101207  320107 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:38.101240  320107 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:07:38.101251  320107 cache.go:65] Caching tarball of preloaded images
	I1210 23:07:38.101333  320107 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:07:38.101316  320107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:07:38.101346  320107 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:07:38.101420  320107 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/custom-flannel-177285/config.json ...
	I1210 23:07:38.101437  320107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/custom-flannel-177285/config.json: {Name:mka7dcd7d87ad0073622c441ccfb568085f77b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:38.122708  320107 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:07:38.122734  320107 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:07:38.122751  320107 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:07:38.122780  320107 start.go:360] acquireMachinesLock for custom-flannel-177285: {Name:mk24b43eda837b95eb58e190bfc0ab859bc03a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:07:38.122881  320107 start.go:364] duration metric: took 84.87µs to acquireMachinesLock for "custom-flannel-177285"
	I1210 23:07:38.122904  320107 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-177285 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:38.122985  320107 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:07:39.379577  314972 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:07:39.379630  314972 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:07:39.379780  314972 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:07:39.379878  314972 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:07:39.379923  314972 kubeadm.go:319] OS: Linux
	I1210 23:07:39.379980  314972 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:07:39.380048  314972 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:07:39.380096  314972 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:07:39.380141  314972 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:07:39.380222  314972 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:07:39.380313  314972 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:07:39.380388  314972 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:07:39.380470  314972 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:07:39.380575  314972 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:07:39.380736  314972 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:07:39.380838  314972 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:07:39.380914  314972 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 23:07:39.382684  314972 out.go:252]   - Generating certificates and keys ...
	I1210 23:07:39.382792  314972 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:07:39.382914  314972 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:07:39.383013  314972 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:07:39.383102  314972 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:07:39.383200  314972 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:07:39.383296  314972 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:07:39.383386  314972 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:07:39.383564  314972 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-177285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:07:39.383658  314972 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:07:39.383816  314972 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-177285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:07:39.383895  314972 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:07:39.383972  314972 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:07:39.384029  314972 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:07:39.384101  314972 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:07:39.384158  314972 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:07:39.384219  314972 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:07:39.384281  314972 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:07:39.384360  314972 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:07:39.384420  314972 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:07:39.384512  314972 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:07:39.384593  314972 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:07:39.388178  314972 out.go:252]   - Booting up control plane ...
	I1210 23:07:39.388298  314972 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:07:39.388412  314972 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:07:39.388512  314972 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:07:39.388633  314972 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:07:39.388749  314972 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:07:39.388887  314972 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:07:39.389031  314972 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:07:39.389123  314972 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:07:39.389323  314972 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:07:39.389500  314972 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:07:39.389586  314972 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500917847s
	I1210 23:07:39.389713  314972 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:07:39.389783  314972 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1210 23:07:39.389948  314972 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:07:39.390098  314972 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:07:39.390197  314972 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.265097051s
	I1210 23:07:39.390277  314972 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.938914553s
	I1210 23:07:39.390363  314972 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001616395s
	I1210 23:07:39.390484  314972 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:07:39.390639  314972 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:07:39.390763  314972 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:07:39.391001  314972 kubeadm.go:319] [mark-control-plane] Marking the node calico-177285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:07:39.391077  314972 kubeadm.go:319] [bootstrap-token] Using token: lsadig.pq7vihms9arwqoo8
	I1210 23:07:39.396179  314972 out.go:252]   - Configuring RBAC rules ...
	I1210 23:07:39.396359  314972 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:07:39.396749  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:07:39.397007  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:07:39.397150  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:07:39.397307  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:07:39.397428  314972 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:07:39.397567  314972 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:07:39.397624  314972 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:07:39.397850  314972 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:07:39.397873  314972 kubeadm.go:319] 
	I1210 23:07:39.397977  314972 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:07:39.397987  314972 kubeadm.go:319] 
	I1210 23:07:39.398089  314972 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:07:39.398108  314972 kubeadm.go:319] 
	I1210 23:07:39.398139  314972 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:07:39.398289  314972 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:07:39.398392  314972 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:07:39.398419  314972 kubeadm.go:319] 
	I1210 23:07:39.398500  314972 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:07:39.398510  314972 kubeadm.go:319] 
	I1210 23:07:39.398569  314972 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:07:39.398580  314972 kubeadm.go:319] 
	I1210 23:07:39.398733  314972 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:07:39.398879  314972 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:07:39.399029  314972 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:07:39.399037  314972 kubeadm.go:319] 
	I1210 23:07:39.399198  314972 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:07:39.399370  314972 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:07:39.399378  314972 kubeadm.go:319] 
	I1210 23:07:39.399534  314972 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lsadig.pq7vihms9arwqoo8 \
	I1210 23:07:39.399859  314972 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:07:39.399913  314972 kubeadm.go:319] 	--control-plane 
	I1210 23:07:39.399930  314972 kubeadm.go:319] 
	I1210 23:07:39.400097  314972 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:07:39.400108  314972 kubeadm.go:319] 
	I1210 23:07:39.400256  314972 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lsadig.pq7vihms9arwqoo8 \
	I1210 23:07:39.400424  314972 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:07:39.400470  314972 cni.go:84] Creating CNI manager for "calico"
	I1210 23:07:39.403307  314972 out.go:179] * Configuring Calico (Container Networking Interface) ...
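	The kubeadm output above prints a join command whose --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A minimal Go sketch of recomputing that value, assuming the CA certificate sits at /var/lib/minikube/certs/ca.crt (the certificateDir reported earlier in this run); this is an illustrative helper, not part of the test suite:

	// recompute the kubeadm --discovery-token-ca-cert-hash from the cluster CA
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed path, taken from the certificateDir printed in the kubeadm output above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in CA file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}

	Run against the CA of a healthy cluster, the printed value should match the hash shown in the join command above.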
	
	
	==> CRI-O <==
	Dec 10 23:06:56 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:06:56.233187145Z" level=info msg="Started container" PID=1748 containerID=642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper id=d3545ce8-a9e8-4c06-8368-1b2202ff8442 name=/runtime.v1.RuntimeService/StartContainer sandboxID=943d1b79a3dd20d4d58c444b393eb185371e643215f6dc2cdb89cda5673c1657
	Dec 10 23:06:57 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:06:57.194046252Z" level=info msg="Removing container: 770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8" id=539d68e3-c909-4810-9c64-b0e3ca5b184a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:06:57 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:06:57.205606536Z" level=info msg="Removed container 770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=539d68e3-c909-4810-9c64-b0e3ca5b184a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.104122373Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a2aacb26-4f3c-445d-a58a-ee3bd17b194a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.105026527Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=39151e43-4436-4f01-8ae9-f58a8f6713a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.106145538Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=d587a570-23db-4245-b8eb-8af4791b752e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.106280972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.111686866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.112202262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.142289732Z" level=info msg="Created container 5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=d587a570-23db-4245-b8eb-8af4791b752e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.142961559Z" level=info msg="Starting container: 5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3" id=ef5ebfeb-0c89-4e85-83bb-6036389da5e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.145158674Z" level=info msg="Started container" PID=1758 containerID=5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper id=ef5ebfeb-0c89-4e85-83bb-6036389da5e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=943d1b79a3dd20d4d58c444b393eb185371e643215f6dc2cdb89cda5673c1657
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.237080054Z" level=info msg="Removing container: 642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e" id=c2c05265-ea69-4f2a-be62-4a94d1e6d55f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.251252497Z" level=info msg="Removed container 642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=c2c05265-ea69-4f2a-be62-4a94d1e6d55f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.249936512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4422a7aa-a97d-4ffc-bfe6-851e2f410db4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.250913836Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1adf94cf-2ae8-4d03-9ba6-c8efed4e6d81 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.252024141Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aaa3cf3d-978d-4c10-a2ce-fa1bd6ea0615 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.252158076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.257489643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.257695993Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/838c231bdd5563e2c06b593a6e343009a468bb9503f51b6c18ff91ddf57b676f/merged/etc/passwd: no such file or directory"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.257727841Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/838c231bdd5563e2c06b593a6e343009a468bb9503f51b6c18ff91ddf57b676f/merged/etc/group: no such file or directory"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.258029116Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.293672828Z" level=info msg="Created container 03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd: kube-system/storage-provisioner/storage-provisioner" id=aaa3cf3d-978d-4c10-a2ce-fa1bd6ea0615 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.294310863Z" level=info msg="Starting container: 03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd" id=8c2cd2ef-513f-4d9f-bd69-6cdbdc1f1810 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.2964386Z" level=info msg="Started container" PID=1772 containerID=03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd description=kube-system/storage-provisioner/storage-provisioner id=8c2cd2ef-513f-4d9f-bd69-6cdbdc1f1810 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da027e0dbf04ec4cce3dcbd3f38a5e7c033bed8e9997d027166b4fd35c97735b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	03c79ea398484       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   da027e0dbf04e       storage-provisioner                                    kube-system
	5d482c89a3a3b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   943d1b79a3dd2       dashboard-metrics-scraper-6ffb444bf9-8zkpz             kubernetes-dashboard
	57cd064b10a71       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   7f5d428a0a246       kubernetes-dashboard-855c9754f9-ptwlg                  kubernetes-dashboard
	7044fc6a78df2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   a2502a9aa215c       coredns-66bc5c9577-s8zsm                               kube-system
	cd7423ae4c1f6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   06303059c8971       busybox                                                default
	c3e1a84483438       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   da027e0dbf04e       storage-provisioner                                    kube-system
	ac149f419c261       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   f74b906890408       kindnet-wtcv9                                          kube-system
	d8958d68c8e77       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   3258b00960cf2       kube-proxy-lwnhd                                       kube-system
	26242817f00b9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   902fb8bbb4b42       kube-apiserver-default-k8s-diff-port-443884            kube-system
	42eba47182dff       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   cf7268e9dd80e       kube-scheduler-default-k8s-diff-port-443884            kube-system
	ea42483f6d60b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   a87ce59e69dd0       etcd-default-k8s-diff-port-443884                      kube-system
	2ca8d279d32da       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   73f91a1a94e21       kube-controller-manager-default-k8s-diff-port-443884   kube-system
	
	
	==> coredns [7044fc6a78df205247cf8e9b36174a84891d5209da8269abec63b4e1e9a01dce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56797 - 59508 "HINFO IN 469420350309246556.5769366022184003114. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036610382s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
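	The CoreDNS errors above are plain TCP timeouts against the kubernetes service ClusterIP. A minimal Go sketch of the same probe, assuming it is run from a pod on the cluster network; the address is taken from the log lines above and nothing here is part of the test suite:

	// probe the kubernetes service ClusterIP that CoreDNS reports as unreachable
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// ClusterIP:port copied from the CoreDNS dial errors above.
		addr := "10.96.0.1:443"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}

	A timeout here, mirroring the log, would point at service routing (kube-proxy/kindnet) rather than at CoreDNS itself.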
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-443884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-443884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=default-k8s-diff-port-443884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_05_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-443884
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:07:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:06:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-443884
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                9e4f21fa-7258-4d07-9208-772a36f1e976
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-s8zsm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-443884                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-wtcv9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-443884             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-443884    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-lwnhd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-443884             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8zkpz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ptwlg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-443884 event: Registered Node default-k8s-diff-port-443884 in Controller
	  Normal  NodeReady                100s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-443884 event: Registered Node default-k8s-diff-port-443884 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2] <==
	{"level":"warn","ts":"2025-12-10T23:06:43.386688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.421578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.428860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.436956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.446630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.464429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.473153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.481761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.490504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.498363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.507522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.516507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.525060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.534066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.544870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.554922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.565945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.573827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.583228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.601257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.613339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.621977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.701436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T23:06:54.084557Z","caller":"traceutil/trace.go:172","msg":"trace[10440103] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"132.762926ms","start":"2025-12-10T23:06:53.951773Z","end":"2025-12-10T23:06:54.084536Z","steps":["trace[10440103] 'process raft request'  (duration: 97.834522ms)","trace[10440103] 'compare'  (duration: 34.648899ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:06:59.414100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.190276ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357225562366975 > lease_revoke:<id:59069b0a83604076>","response":"size:28"}
	
	
	==> kernel <==
	 23:07:41 up 50 min,  0 user,  load average: 7.20, 4.43, 2.52
	Linux default-k8s-diff-port-443884 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac149f419c2611effc42416739c5200b8fc3d7699559c20bd6f60b50894ab601] <==
	I1210 23:06:45.627105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:45.627466       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 23:06:45.627701       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:45.627723       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:45.627748       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:45.987160       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:45.987186       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:45.987197       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:45.988027       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:06:46.287267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:06:46.287293       1 metrics.go:72] Registering metrics
	I1210 23:06:46.287365       1 controller.go:711] "Syncing nftables rules"
	I1210 23:06:55.986858       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:06:55.986938       1 main.go:301] handling current node
	I1210 23:07:05.992898       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:05.992964       1 main.go:301] handling current node
	I1210 23:07:15.986714       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:15.986764       1 main.go:301] handling current node
	I1210 23:07:25.987719       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:25.987748       1 main.go:301] handling current node
	I1210 23:07:35.987717       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:35.987760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1] <==
	I1210 23:06:44.383481       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:06:44.383485       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 23:06:44.383527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 23:06:44.383472       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:06:44.383726       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 23:06:44.386079       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 23:06:44.386102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 23:06:44.395966       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 23:06:44.396077       1 policy_source.go:240] refreshing policies
	I1210 23:06:44.398700       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:06:44.401240       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 23:06:44.401585       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 23:06:44.404751       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 23:06:44.443985       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:44.847951       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:06:44.894552       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:44.933939       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:44.953855       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:44.968775       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:45.035214       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.16.26"}
	I1210 23:06:45.058229       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.64.180"}
	I1210 23:06:45.293001       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:06:47.848425       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:48.198248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:48.347146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f] <==
	I1210 23:06:47.725264       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 23:06:47.736683       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 23:06:47.742202       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:06:47.742228       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 23:06:47.742236       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 23:06:47.744566       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:06:47.744638       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 23:06:47.744696       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 23:06:47.744797       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 23:06:47.744822       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 23:06:47.744902       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-443884"
	I1210 23:06:47.744821       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 23:06:47.744932       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:06:47.744953       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 23:06:47.745016       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 23:06:47.745156       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 23:06:47.745271       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 23:06:47.746151       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 23:06:47.748058       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 23:06:47.748483       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:06:47.751376       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 23:06:47.751787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:47.753696       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:06:47.778018       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:47.804954       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d8958d68c8e773b2cb94da3cc6d13f3cf27a5a8ecb168fac8decd50a0af55dfc] <==
	I1210 23:06:45.563515       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:45.640663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:06:45.741815       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:06:45.741857       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 23:06:45.741936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:45.765874       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:45.765946       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:06:45.772153       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:45.772531       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:06:45.772548       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:45.774342       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:45.774367       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:45.774391       1 config.go:200] "Starting service config controller"
	I1210 23:06:45.774406       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:45.774413       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:45.774417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:45.774735       1 config.go:309] "Starting node config controller"
	I1210 23:06:45.774746       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:45.774753       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:45.874988       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:45.875135       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:06:45.875134       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865] <==
	I1210 23:06:42.536858       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:06:44.344272       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:06:44.344313       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:06:44.344325       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:06:44.344334       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:06:44.406379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 23:06:44.406410       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:44.413410       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:44.413502       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:44.413860       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:06:44.413880       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:06:44.514347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:06:48 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:48.467808     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6htv\" (UniqueName: \"kubernetes.io/projected/38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac-kube-api-access-v6htv\") pod \"dashboard-metrics-scraper-6ffb444bf9-8zkpz\" (UID: \"38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz"
	Dec 10 23:06:48 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:48.467837     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8zkpz\" (UID: \"38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz"
	Dec 10 23:06:53 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:53.560734     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ptwlg" podStartSLOduration=1.48568682 podStartE2EDuration="5.560708424s" podCreationTimestamp="2025-12-10 23:06:48 +0000 UTC" firstStartedPulling="2025-12-10 23:06:48.644381777 +0000 UTC m=+7.660988786" lastFinishedPulling="2025-12-10 23:06:52.719403389 +0000 UTC m=+11.736010390" observedRunningTime="2025-12-10 23:06:53.192841009 +0000 UTC m=+12.209448033" watchObservedRunningTime="2025-12-10 23:06:53.560708424 +0000 UTC m=+12.577315443"
	Dec 10 23:06:53 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:53.866010     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 23:06:56 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:56.187775     718 scope.go:117] "RemoveContainer" containerID="770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8"
	Dec 10 23:06:57 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:57.192761     718 scope.go:117] "RemoveContainer" containerID="770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8"
	Dec 10 23:06:57 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:57.192941     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:06:57 default-k8s-diff-port-443884 kubelet[718]: E1210 23:06:57.193154     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:06:58 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:58.197245     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:06:58 default-k8s-diff-port-443884 kubelet[718]: E1210 23:06:58.197450     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:00 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:00.325863     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:07:00 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:00.326087     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:12.103569     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:12.234989     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:12.235255     718 scope.go:117] "RemoveContainer" containerID="5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:12.235457     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:16 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:16.249467     718 scope.go:117] "RemoveContainer" containerID="c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910"
	Dec 10 23:07:20 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:20.325221     718 scope.go:117] "RemoveContainer" containerID="5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	Dec 10 23:07:20 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:20.325430     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:31 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:31.104508     718 scope.go:117] "RemoveContainer" containerID="5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	Dec 10 23:07:31 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:31.104748     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: kubelet.service: Consumed 1.832s CPU time.
	
	
	==> kubernetes-dashboard [57cd064b10a71dd8a8609addc81b713e938d596421399354feaee45d87ab2b89] <==
	2025/12/10 23:06:52 Using namespace: kubernetes-dashboard
	2025/12/10 23:06:52 Using in-cluster config to connect to apiserver
	2025/12/10 23:06:52 Using secret token for csrf signing
	2025/12/10 23:06:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:06:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:06:52 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 23:06:52 Generating JWE encryption key
	2025/12/10 23:06:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:06:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:06:53 Initializing JWE encryption key from synchronized object
	2025/12/10 23:06:53 Creating in-cluster Sidecar client
	2025/12/10 23:06:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:53 Serving insecurely on HTTP port: 9090
	2025/12/10 23:07:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:52 Starting overwatch
	
	
	==> storage-provisioner [03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd] <==
	I1210 23:07:16.309551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 23:07:16.319376       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:07:16.319464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:07:16.321588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:19.778243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.039209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:27.638222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:30.692426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:33.714867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:33.720200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:33.720366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:07:33.720509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-443884_2d7d06be-2d4e-4aab-bdd8-34933ef40b8a!
	I1210 23:07:33.720512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df00d01c-2573-4975-bde4-5f3658985b9c", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-443884_2d7d06be-2d4e-4aab-bdd8-34933ef40b8a became leader
	W1210 23:07:33.722457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:33.726079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:33.821605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-443884_2d7d06be-2d4e-4aab-bdd8-34933ef40b8a!
	W1210 23:07:35.729438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:35.734484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:37.741806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:37.746169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:39.749693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:39.754474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:41.757827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:41.880387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910] <==
	I1210 23:06:45.527403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:07:15.531369       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884: exit status 2 (409.44811ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-443884
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-443884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b",
	        "Created": "2025-12-10T23:05:26.959123143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:06:34.325138246Z",
	            "FinishedAt": "2025-12-10T23:06:33.197843504Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/hosts",
	        "LogPath": "/var/lib/docker/containers/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b/a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b-json.log",
	        "Name": "/default-k8s-diff-port-443884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-443884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-443884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8275652c47b959ba18bcf028be372f8600614ac7f3d641308b526444818d51b",
	                "LowerDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b-init/diff:/var/lib/docker/overlay2/dcbbabe0ad6e2d3bee9c327fe340e7dbd996d625797917e8c5f83458eab4210c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7e5781b680ad4b06c430331432e57879666e9603237e138fcd42ece35aabe5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-443884",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-443884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-443884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-443884",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-443884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "375b66f2a05ac3c87d3c82dd52ed59b5f004f75bb8c3dca84798cf0d3236e69f",
	            "SandboxKey": "/var/run/docker/netns/375b66f2a05a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-443884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8875699386e55b3c7ba6f71ae6cb594bed837dd60b39b87d708bd26d3360a926",
	                    "EndpointID": "041dee59b71dd88f49caedfdc95cfd31e899eb2839f139109feefb260be1a67a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "5e:89:a7:4f:20:aa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-443884",
	                        "a8275652c47b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884: exit status 2 (366.338638ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-443884 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-443884 logs -n 25: (1.498407301s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-177285 sudo docker system info                                                                                                                             │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cri-dockerd --version                                                                                                                          │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p auto-177285 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo containerd config dump                                                                                                                         │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ ssh     │ -p auto-177285 sudo crio config                                                                                                                                    │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ delete  │ -p auto-177285                                                                                                                                                     │ auto-177285                  │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ start   │ -p calico-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-177285                │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ image   │ embed-certs-468067 image list --format=json                                                                                                                        │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ pause   │ -p embed-certs-468067 --alsologtostderr -v=1                                                                                                                       │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ delete  │ -p embed-certs-468067                                                                                                                                              │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ image   │ default-k8s-diff-port-443884 image list --format=json                                                                                                              │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ pause   │ -p default-k8s-diff-port-443884 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-443884 │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ delete  │ -p embed-certs-468067                                                                                                                                              │ embed-certs-468067           │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	│ start   │ -p custom-flannel-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-177285        │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │                     │
	│ ssh     │ -p kindnet-177285 pgrep -a kubelet                                                                                                                                 │ kindnet-177285               │ jenkins │ v1.37.0 │ 10 Dec 25 23:07 UTC │ 10 Dec 25 23:07 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:07:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:07:37.908502  320107 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:07:37.908912  320107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:37.908929  320107 out.go:374] Setting ErrFile to fd 2...
	I1210 23:07:37.908935  320107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:07:37.909205  320107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:07:37.909867  320107 out.go:368] Setting JSON to false
	I1210 23:07:37.911461  320107 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3000,"bootTime":1765405058,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:07:37.911545  320107 start.go:143] virtualization: kvm guest
	I1210 23:07:37.913871  320107 out.go:179] * [custom-flannel-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:07:37.915871  320107 notify.go:221] Checking for updates...
	I1210 23:07:37.915909  320107 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:07:37.917707  320107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:07:37.919398  320107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:37.920853  320107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:07:37.922027  320107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:07:37.923244  320107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:07:37.925236  320107 config.go:182] Loaded profile config "calico-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.925435  320107 config.go:182] Loaded profile config "default-k8s-diff-port-443884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.925567  320107 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:37.925708  320107 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:07:37.956295  320107 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:07:37.956424  320107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:38.026918  320107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:38.015144609 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:38.027056  320107 docker.go:319] overlay module found
	I1210 23:07:38.028860  320107 out.go:179] * Using the docker driver based on user configuration
	I1210 23:07:38.030112  320107 start.go:309] selected driver: docker
	I1210 23:07:38.030132  320107 start.go:927] validating driver "docker" against <nil>
	I1210 23:07:38.030148  320107 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:07:38.031013  320107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:07:38.091993  320107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 23:07:38.082259567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:07:38.092187  320107 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:07:38.092439  320107 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:07:38.094255  320107 out.go:179] * Using Docker driver with root privileges
	I1210 23:07:38.095694  320107 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1210 23:07:38.095729  320107 start_flags.go:351] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 23:07:38.095838  320107 start.go:353] cluster config:
	{Name:custom-flannel-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:07:38.097464  320107 out.go:179] * Starting "custom-flannel-177285" primary control-plane node in "custom-flannel-177285" cluster
	I1210 23:07:38.098774  320107 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:07:38.100050  320107 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:07:38.101207  320107 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:38.101240  320107 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:07:38.101251  320107 cache.go:65] Caching tarball of preloaded images
	I1210 23:07:38.101333  320107 preload.go:238] Found /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:07:38.101316  320107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:07:38.101346  320107 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:07:38.101420  320107 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/custom-flannel-177285/config.json ...
	I1210 23:07:38.101437  320107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/custom-flannel-177285/config.json: {Name:mka7dcd7d87ad0073622c441ccfb568085f77b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:38.122708  320107 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 23:07:38.122734  320107 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 23:07:38.122751  320107 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:07:38.122780  320107 start.go:360] acquireMachinesLock for custom-flannel-177285: {Name:mk24b43eda837b95eb58e190bfc0ab859bc03a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:07:38.122881  320107 start.go:364] duration metric: took 84.87µs to acquireMachinesLock for "custom-flannel-177285"
	I1210 23:07:38.122904  320107 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-177285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-177285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:38.122985  320107 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:07:39.379577  314972 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:07:39.379630  314972 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:07:39.379780  314972 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:07:39.379878  314972 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1210 23:07:39.379923  314972 kubeadm.go:319] OS: Linux
	I1210 23:07:39.379980  314972 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:07:39.380048  314972 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:07:39.380096  314972 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:07:39.380141  314972 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:07:39.380222  314972 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:07:39.380313  314972 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:07:39.380388  314972 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:07:39.380470  314972 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 23:07:39.380575  314972 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:07:39.380736  314972 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:07:39.380838  314972 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:07:39.380914  314972 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 23:07:39.382684  314972 out.go:252]   - Generating certificates and keys ...
	I1210 23:07:39.382792  314972 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:07:39.382914  314972 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:07:39.383013  314972 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:07:39.383102  314972 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:07:39.383200  314972 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:07:39.383296  314972 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:07:39.383386  314972 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:07:39.383564  314972 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-177285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:07:39.383658  314972 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:07:39.383816  314972 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-177285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 23:07:39.383895  314972 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:07:39.383972  314972 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:07:39.384029  314972 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:07:39.384101  314972 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:07:39.384158  314972 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:07:39.384219  314972 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:07:39.384281  314972 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:07:39.384360  314972 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:07:39.384420  314972 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:07:39.384512  314972 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:07:39.384593  314972 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:07:39.388178  314972 out.go:252]   - Booting up control plane ...
	I1210 23:07:39.388298  314972 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:07:39.388412  314972 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:07:39.388512  314972 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:07:39.388633  314972 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:07:39.388749  314972 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:07:39.388887  314972 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:07:39.389031  314972 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:07:39.389123  314972 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:07:39.389323  314972 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:07:39.389500  314972 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:07:39.389586  314972 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500917847s
	I1210 23:07:39.389713  314972 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:07:39.389783  314972 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1210 23:07:39.389948  314972 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:07:39.390098  314972 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:07:39.390197  314972 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.265097051s
	I1210 23:07:39.390277  314972 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.938914553s
	I1210 23:07:39.390363  314972 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001616395s
	I1210 23:07:39.390484  314972 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:07:39.390639  314972 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:07:39.390763  314972 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:07:39.391001  314972 kubeadm.go:319] [mark-control-plane] Marking the node calico-177285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:07:39.391077  314972 kubeadm.go:319] [bootstrap-token] Using token: lsadig.pq7vihms9arwqoo8
	I1210 23:07:39.396179  314972 out.go:252]   - Configuring RBAC rules ...
	I1210 23:07:39.396359  314972 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:07:39.396749  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:07:39.397007  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:07:39.397150  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:07:39.397307  314972 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:07:39.397428  314972 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:07:39.397567  314972 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:07:39.397624  314972 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:07:39.397850  314972 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:07:39.397873  314972 kubeadm.go:319] 
	I1210 23:07:39.397977  314972 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:07:39.397987  314972 kubeadm.go:319] 
	I1210 23:07:39.398089  314972 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:07:39.398108  314972 kubeadm.go:319] 
	I1210 23:07:39.398139  314972 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:07:39.398289  314972 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:07:39.398392  314972 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:07:39.398419  314972 kubeadm.go:319] 
	I1210 23:07:39.398500  314972 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:07:39.398510  314972 kubeadm.go:319] 
	I1210 23:07:39.398569  314972 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:07:39.398580  314972 kubeadm.go:319] 
	I1210 23:07:39.398733  314972 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:07:39.398879  314972 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:07:39.399029  314972 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:07:39.399037  314972 kubeadm.go:319] 
	I1210 23:07:39.399198  314972 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:07:39.399370  314972 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:07:39.399378  314972 kubeadm.go:319] 
	I1210 23:07:39.399534  314972 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lsadig.pq7vihms9arwqoo8 \
	I1210 23:07:39.399859  314972 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 \
	I1210 23:07:39.399913  314972 kubeadm.go:319] 	--control-plane 
	I1210 23:07:39.399930  314972 kubeadm.go:319] 
	I1210 23:07:39.400097  314972 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:07:39.400108  314972 kubeadm.go:319] 
	I1210 23:07:39.400256  314972 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lsadig.pq7vihms9arwqoo8 \
	I1210 23:07:39.400424  314972 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8e17e4a5dbdfabf76880e4f99b7a6e0307fab513adf538e7238c44f4f98228c1 
	I1210 23:07:39.400470  314972 cni.go:84] Creating CNI manager for "calico"
	I1210 23:07:39.403307  314972 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1210 23:07:38.124974  320107 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 23:07:38.125227  320107 start.go:159] libmachine.API.Create for "custom-flannel-177285" (driver="docker")
	I1210 23:07:38.125257  320107 client.go:173] LocalClient.Create starting
	I1210 23:07:38.125305  320107 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/ca.pem
	I1210 23:07:38.125344  320107 main.go:143] libmachine: Decoding PEM data...
	I1210 23:07:38.125363  320107 main.go:143] libmachine: Parsing certificate...
	I1210 23:07:38.125425  320107 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-5100/.minikube/certs/cert.pem
	I1210 23:07:38.125444  320107 main.go:143] libmachine: Decoding PEM data...
	I1210 23:07:38.125454  320107 main.go:143] libmachine: Parsing certificate...
	I1210 23:07:38.125815  320107 cli_runner.go:164] Run: docker network inspect custom-flannel-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:07:38.146090  320107 cli_runner.go:211] docker network inspect custom-flannel-177285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:07:38.146186  320107 network_create.go:284] running [docker network inspect custom-flannel-177285] to gather additional debugging logs...
	I1210 23:07:38.146210  320107 cli_runner.go:164] Run: docker network inspect custom-flannel-177285
	W1210 23:07:38.164265  320107 cli_runner.go:211] docker network inspect custom-flannel-177285 returned with exit code 1
	I1210 23:07:38.164293  320107 network_create.go:287] error running [docker network inspect custom-flannel-177285]: docker network inspect custom-flannel-177285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-177285 not found
	I1210 23:07:38.164309  320107 network_create.go:289] output of [docker network inspect custom-flannel-177285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-177285 not found
	
	** /stderr **
	I1210 23:07:38.164437  320107 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:07:38.183181  320107 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
	I1210 23:07:38.184118  320107 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76f83b592538 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:0e:f0:db:bb:fd} reservation:<nil>}
	I1210 23:07:38.185071  320107 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16b8fd5f1653 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:71:cf:dd:99:92} reservation:<nil>}
	I1210 23:07:38.185692  320107 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8875699386e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:89:d4:9b:b9:bc} reservation:<nil>}
	I1210 23:07:38.186446  320107 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a5b1d987b87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ca:3e:51:dc:a7:74} reservation:<nil>}
	I1210 23:07:38.187136  320107 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-c944c42d058e IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:62:36:3b:26:20:5f} reservation:<nil>}
	I1210 23:07:38.187926  320107 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1cd60}
	I1210 23:07:38.187963  320107 network_create.go:124] attempt to create docker network custom-flannel-177285 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 23:07:38.188015  320107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-177285 custom-flannel-177285
	I1210 23:07:38.237427  320107 network_create.go:108] docker network custom-flannel-177285 192.168.103.0/24 created
	I1210 23:07:38.237456  320107 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-177285" container
	I1210 23:07:38.237568  320107 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:07:38.255714  320107 cli_runner.go:164] Run: docker volume create custom-flannel-177285 --label name.minikube.sigs.k8s.io=custom-flannel-177285 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:07:38.274718  320107 oci.go:103] Successfully created a docker volume custom-flannel-177285
	I1210 23:07:38.274812  320107 cli_runner.go:164] Run: docker run --rm --name custom-flannel-177285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-177285 --entrypoint /usr/bin/test -v custom-flannel-177285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:07:38.727948  320107 oci.go:107] Successfully prepared a docker volume custom-flannel-177285
	I1210 23:07:38.728022  320107 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:07:38.728039  320107 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:07:38.728118  320107 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-177285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:07:39.405123  314972 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:07:39.405145  314972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1210 23:07:39.420807  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 23:07:40.691891  314972 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.271042263s)
	I1210 23:07:40.691941  314972 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:07:40.692116  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:40.692259  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-177285 minikube.k8s.io/updated_at=2025_12_10T23_07_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=calico-177285 minikube.k8s.io/primary=true
	I1210 23:07:40.704265  314972 ops.go:34] apiserver oom_adj: -16
	I1210 23:07:40.808454  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:41.308700  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:41.808523  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:42.308683  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:42.810009  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:43.309302  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:43.808574  314972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:07:43.908608  314972 kubeadm.go:1114] duration metric: took 3.216467394s to wait for elevateKubeSystemPrivileges
	I1210 23:07:43.908763  314972 kubeadm.go:403] duration metric: took 15.683019638s to StartCluster
	I1210 23:07:43.908790  314972 settings.go:142] acquiring lock: {Name:mk331e18459f848c5635f4b94ea79f852f6bf8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:43.908897  314972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:07:43.911110  314972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5100/kubeconfig: {Name:mk5dc3acbc451e231431abd9ddf761bfe3eac309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:07:43.911373  314972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:07:43.911405  314972 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:07:43.911495  314972 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:07:43.911715  314972 config.go:182] Loaded profile config "calico-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:07:43.911725  314972 addons.go:70] Setting storage-provisioner=true in profile "calico-177285"
	I1210 23:07:43.911742  314972 addons.go:70] Setting default-storageclass=true in profile "calico-177285"
	I1210 23:07:43.911790  314972 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-177285"
	I1210 23:07:43.911751  314972 addons.go:239] Setting addon storage-provisioner=true in "calico-177285"
	I1210 23:07:43.911874  314972 host.go:66] Checking if "calico-177285" exists ...
	I1210 23:07:43.912221  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:43.912404  314972 cli_runner.go:164] Run: docker container inspect calico-177285 --format={{.State.Status}}
	I1210 23:07:43.913375  314972 out.go:179] * Verifying Kubernetes components...
	I1210 23:07:43.914871  314972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:07:43.947465  314972 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Dec 10 23:06:56 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:06:56.233187145Z" level=info msg="Started container" PID=1748 containerID=642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper id=d3545ce8-a9e8-4c06-8368-1b2202ff8442 name=/runtime.v1.RuntimeService/StartContainer sandboxID=943d1b79a3dd20d4d58c444b393eb185371e643215f6dc2cdb89cda5673c1657
	Dec 10 23:06:57 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:06:57.194046252Z" level=info msg="Removing container: 770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8" id=539d68e3-c909-4810-9c64-b0e3ca5b184a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:06:57 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:06:57.205606536Z" level=info msg="Removed container 770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=539d68e3-c909-4810-9c64-b0e3ca5b184a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.104122373Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a2aacb26-4f3c-445d-a58a-ee3bd17b194a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.105026527Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=39151e43-4436-4f01-8ae9-f58a8f6713a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.106145538Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=d587a570-23db-4245-b8eb-8af4791b752e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.106280972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.111686866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.112202262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.142289732Z" level=info msg="Created container 5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=d587a570-23db-4245-b8eb-8af4791b752e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.142961559Z" level=info msg="Starting container: 5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3" id=ef5ebfeb-0c89-4e85-83bb-6036389da5e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.145158674Z" level=info msg="Started container" PID=1758 containerID=5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper id=ef5ebfeb-0c89-4e85-83bb-6036389da5e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=943d1b79a3dd20d4d58c444b393eb185371e643215f6dc2cdb89cda5673c1657
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.237080054Z" level=info msg="Removing container: 642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e" id=c2c05265-ea69-4f2a-be62-4a94d1e6d55f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:07:12 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:12.251252497Z" level=info msg="Removed container 642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz/dashboard-metrics-scraper" id=c2c05265-ea69-4f2a-be62-4a94d1e6d55f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.249936512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4422a7aa-a97d-4ffc-bfe6-851e2f410db4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.250913836Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1adf94cf-2ae8-4d03-9ba6-c8efed4e6d81 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.252024141Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aaa3cf3d-978d-4c10-a2ce-fa1bd6ea0615 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.252158076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.257489643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.257695993Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/838c231bdd5563e2c06b593a6e343009a468bb9503f51b6c18ff91ddf57b676f/merged/etc/passwd: no such file or directory"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.257727841Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/838c231bdd5563e2c06b593a6e343009a468bb9503f51b6c18ff91ddf57b676f/merged/etc/group: no such file or directory"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.258029116Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.293672828Z" level=info msg="Created container 03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd: kube-system/storage-provisioner/storage-provisioner" id=aaa3cf3d-978d-4c10-a2ce-fa1bd6ea0615 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.294310863Z" level=info msg="Starting container: 03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd" id=8c2cd2ef-513f-4d9f-bd69-6cdbdc1f1810 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:07:16 default-k8s-diff-port-443884 crio[560]: time="2025-12-10T23:07:16.2964386Z" level=info msg="Started container" PID=1772 containerID=03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd description=kube-system/storage-provisioner/storage-provisioner id=8c2cd2ef-513f-4d9f-bd69-6cdbdc1f1810 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da027e0dbf04ec4cce3dcbd3f38a5e7c033bed8e9997d027166b4fd35c97735b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	03c79ea398484       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   da027e0dbf04e       storage-provisioner                                    kube-system
	5d482c89a3a3b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   943d1b79a3dd2       dashboard-metrics-scraper-6ffb444bf9-8zkpz             kubernetes-dashboard
	57cd064b10a71       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   52 seconds ago       Running             kubernetes-dashboard        0                   7f5d428a0a246       kubernetes-dashboard-855c9754f9-ptwlg                  kubernetes-dashboard
	7044fc6a78df2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     0                   a2502a9aa215c       coredns-66bc5c9577-s8zsm                               kube-system
	cd7423ae4c1f6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   06303059c8971       busybox                                                default
	c3e1a84483438       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         0                   da027e0dbf04e       storage-provisioner                                    kube-system
	ac149f419c261       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 0                   f74b906890408       kindnet-wtcv9                                          kube-system
	d8958d68c8e77       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           59 seconds ago       Running             kube-proxy                  0                   3258b00960cf2       kube-proxy-lwnhd                                       kube-system
	26242817f00b9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   902fb8bbb4b42       kube-apiserver-default-k8s-diff-port-443884            kube-system
	42eba47182dff       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   cf7268e9dd80e       kube-scheduler-default-k8s-diff-port-443884            kube-system
	ea42483f6d60b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   a87ce59e69dd0       etcd-default-k8s-diff-port-443884                      kube-system
	2ca8d279d32da       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   73f91a1a94e21       kube-controller-manager-default-k8s-diff-port-443884   kube-system
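
The container listing above follows crictl's output format; a minimal sketch for reproducing it on the node (the minikube ssh entry point and the CRI-O socket path are assumptions, not taken from this log):

    # Assumed entry point: minikube ssh -p default-k8s-diff-port-443884
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # Inspect the Exited dashboard-metrics-scraper container listed above (an ID prefix is enough):
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 5d482c89a3a3b
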
	
	
	==> coredns [7044fc6a78df205247cf8e9b36174a84891d5209da8269abec63b4e1e9a01dce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56797 - 59508 "HINFO IN 469420350309246556.5769366022184003114. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036610382s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
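
The dial-timeout errors above come from the coredns pod's own log; assuming the kubeconfig context matches the profile name, the same log can be pulled directly with:

    kubectl --context default-k8s-diff-port-443884 -n kube-system logs coredns-66bc5c9577-s8zsm
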
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-443884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-443884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=default-k8s-diff-port-443884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_05_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-443884
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:07:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:05:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:07:15 +0000   Wed, 10 Dec 2025 23:06:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-443884
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                9e4f21fa-7258-4d07-9208-772a36f1e976
	  Boot ID:                    1773a78d-1ebd-4d5a-a2d4-f9c220d577e4
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-s8zsm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-default-k8s-diff-port-443884                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-wtcv9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-default-k8s-diff-port-443884             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-443884    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-lwnhd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-default-k8s-diff-port-443884             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8zkpz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ptwlg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node default-k8s-diff-port-443884 event: Registered Node default-k8s-diff-port-443884 in Controller
	  Normal  NodeReady                103s               kubelet          Node default-k8s-diff-port-443884 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-443884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node default-k8s-diff-port-443884 event: Registered Node default-k8s-diff-port-443884 in Controller
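
The node description above is standard kubectl describe output; assuming the kubeconfig context matches the profile name, it can be regenerated with:

    kubectl --context default-k8s-diff-port-443884 describe node default-k8s-diff-port-443884
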
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[  +8.255119] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[ +16.382308] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 b3 3f 79 fb 24 32 04 91 a6 d2 85 08 00
	[Dec10 22:34] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.013766] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.022968] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000010] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023808] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023851] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +1.023908] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +2.047745] IPv4: martian source 10.99.210.142 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +4.031556] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	[  +8.447105] IPv4: martian source 10.244.0.4 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 2d f1 67 ce 4e 8e d8 4e b3 8c 35 08 00
	
	
	==> etcd [ea42483f6d60b597b41813f8c197425247e1517c66f962c60b95615a9d41b5f2] <==
	{"level":"warn","ts":"2025-12-10T23:06:43.421578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.428860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.436956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.446630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.464429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.473153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.481761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.490504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.498363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.507522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.516507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.525060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.534066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.544870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.554922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.565945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.573827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.583228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.601257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.613339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.621977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:06:43.701436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T23:06:54.084557Z","caller":"traceutil/trace.go:172","msg":"trace[10440103] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"132.762926ms","start":"2025-12-10T23:06:53.951773Z","end":"2025-12-10T23:06:54.084536Z","steps":["trace[10440103] 'process raft request'  (duration: 97.834522ms)","trace[10440103] 'compare'  (duration: 34.648899ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T23:06:59.414100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.190276ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357225562366975 > lease_revoke:<id:59069b0a83604076>","response":"size:28"}
	{"level":"info","ts":"2025-12-10T23:07:41.879015Z","caller":"traceutil/trace.go:172","msg":"trace[786684459] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"118.696423ms","start":"2025-12-10T23:07:41.760300Z","end":"2025-12-10T23:07:41.878997Z","steps":["trace[786684459] 'process raft request'  (duration: 118.562809ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:07:45 up 50 min,  0 user,  load average: 8.06, 4.65, 2.60
	Linux default-k8s-diff-port-443884 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac149f419c2611effc42416739c5200b8fc3d7699559c20bd6f60b50894ab601] <==
	I1210 23:06:45.627105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 23:06:45.627466       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 23:06:45.627701       1 main.go:148] setting mtu 1500 for CNI 
	I1210 23:06:45.627723       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 23:06:45.627748       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T23:06:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 23:06:45.987160       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 23:06:45.987186       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 23:06:45.987197       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 23:06:45.988027       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 23:06:46.287267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:06:46.287293       1 metrics.go:72] Registering metrics
	I1210 23:06:46.287365       1 controller.go:711] "Syncing nftables rules"
	I1210 23:06:55.986858       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:06:55.986938       1 main.go:301] handling current node
	I1210 23:07:05.992898       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:05.992964       1 main.go:301] handling current node
	I1210 23:07:15.986714       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:15.986764       1 main.go:301] handling current node
	I1210 23:07:25.987719       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:25.987748       1 main.go:301] handling current node
	I1210 23:07:35.987717       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 23:07:35.987760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26242817f00b90fd0a4c8e63cf57e1076dba564702aff5c8b30366e73a9439c1] <==
	I1210 23:06:44.383481       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:06:44.383485       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 23:06:44.383527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 23:06:44.383472       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 23:06:44.383726       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 23:06:44.386079       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 23:06:44.386102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 23:06:44.395966       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 23:06:44.396077       1 policy_source.go:240] refreshing policies
	I1210 23:06:44.398700       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:06:44.401240       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 23:06:44.401585       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 23:06:44.404751       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 23:06:44.443985       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:06:44.847951       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 23:06:44.894552       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:06:44.933939       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:06:44.953855       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:06:44.968775       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:06:45.035214       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.16.26"}
	I1210 23:06:45.058229       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.64.180"}
	I1210 23:06:45.293001       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:06:47.848425       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 23:06:48.198248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:06:48.347146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2ca8d279d32da69db3db788b8b68af302c7858eb58288c38b85d30bf3c63bd4f] <==
	I1210 23:06:47.725264       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 23:06:47.736683       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 23:06:47.742202       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:06:47.742228       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 23:06:47.742236       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 23:06:47.744566       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:06:47.744638       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 23:06:47.744696       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 23:06:47.744797       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 23:06:47.744822       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 23:06:47.744902       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-443884"
	I1210 23:06:47.744821       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 23:06:47.744932       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:06:47.744953       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 23:06:47.745016       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 23:06:47.745156       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 23:06:47.745271       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 23:06:47.746151       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 23:06:47.748058       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 23:06:47.748483       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:06:47.751376       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 23:06:47.751787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:47.753696       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:06:47.778018       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:06:47.804954       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d8958d68c8e773b2cb94da3cc6d13f3cf27a5a8ecb168fac8decd50a0af55dfc] <==
	I1210 23:06:45.563515       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:06:45.640663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:06:45.741815       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:06:45.741857       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 23:06:45.741936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:06:45.765874       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:06:45.765946       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:06:45.772153       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:06:45.772531       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:06:45.772548       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:45.774342       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:06:45.774367       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:06:45.774391       1 config.go:200] "Starting service config controller"
	I1210 23:06:45.774406       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:06:45.774413       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:06:45.774417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:06:45.774735       1 config.go:309] "Starting node config controller"
	I1210 23:06:45.774746       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:06:45.774753       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:06:45.874988       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:06:45.875135       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:06:45.875134       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [42eba47182dff199397f920b2045fc29f292e886ad5a246ae881fddf72f98865] <==
	I1210 23:06:42.536858       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:06:44.344272       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:06:44.344313       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:06:44.344325       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:06:44.344334       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:06:44.406379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 23:06:44.406410       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:06:44.413410       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:44.413502       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:06:44.413860       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:06:44.413880       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:06:44.514347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:06:48 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:48.467808     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6htv\" (UniqueName: \"kubernetes.io/projected/38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac-kube-api-access-v6htv\") pod \"dashboard-metrics-scraper-6ffb444bf9-8zkpz\" (UID: \"38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz"
	Dec 10 23:06:48 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:48.467837     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8zkpz\" (UID: \"38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz"
	Dec 10 23:06:53 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:53.560734     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ptwlg" podStartSLOduration=1.48568682 podStartE2EDuration="5.560708424s" podCreationTimestamp="2025-12-10 23:06:48 +0000 UTC" firstStartedPulling="2025-12-10 23:06:48.644381777 +0000 UTC m=+7.660988786" lastFinishedPulling="2025-12-10 23:06:52.719403389 +0000 UTC m=+11.736010390" observedRunningTime="2025-12-10 23:06:53.192841009 +0000 UTC m=+12.209448033" watchObservedRunningTime="2025-12-10 23:06:53.560708424 +0000 UTC m=+12.577315443"
	Dec 10 23:06:53 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:53.866010     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 23:06:56 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:56.187775     718 scope.go:117] "RemoveContainer" containerID="770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8"
	Dec 10 23:06:57 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:57.192761     718 scope.go:117] "RemoveContainer" containerID="770b119a8d59bc0ff3e61ef1884847a4f66eb0d5af0dc2b1d5a27abe46da06c8"
	Dec 10 23:06:57 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:57.192941     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:06:57 default-k8s-diff-port-443884 kubelet[718]: E1210 23:06:57.193154     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:06:58 default-k8s-diff-port-443884 kubelet[718]: I1210 23:06:58.197245     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:06:58 default-k8s-diff-port-443884 kubelet[718]: E1210 23:06:58.197450     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:00 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:00.325863     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:07:00 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:00.326087     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:12.103569     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:12.234989     718 scope.go:117] "RemoveContainer" containerID="642027da663be9ea331947ba3c89714bd25afb8eaa74df8c5c05c76ce6135e2e"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:12.235255     718 scope.go:117] "RemoveContainer" containerID="5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	Dec 10 23:07:12 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:12.235457     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:16 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:16.249467     718 scope.go:117] "RemoveContainer" containerID="c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910"
	Dec 10 23:07:20 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:20.325221     718 scope.go:117] "RemoveContainer" containerID="5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	Dec 10 23:07:20 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:20.325430     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:31 default-k8s-diff-port-443884 kubelet[718]: I1210 23:07:31.104508     718 scope.go:117] "RemoveContainer" containerID="5d482c89a3a3b211adc90c5caa3d3507faa5aa2ce2b2a0bbca0e119ec723aea3"
	Dec 10 23:07:31 default-k8s-diff-port-443884 kubelet[718]: E1210 23:07:31.104748     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zkpz_kubernetes-dashboard(38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zkpz" podUID="38c7a832-c4e1-4ba5-b473-23dbf3ccd0ac"
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 23:07:37 default-k8s-diff-port-443884 systemd[1]: kubelet.service: Consumed 1.832s CPU time.
	
	
	==> kubernetes-dashboard [57cd064b10a71dd8a8609addc81b713e938d596421399354feaee45d87ab2b89] <==
	2025/12/10 23:06:52 Using namespace: kubernetes-dashboard
	2025/12/10 23:06:52 Using in-cluster config to connect to apiserver
	2025/12/10 23:06:52 Using secret token for csrf signing
	2025/12/10 23:06:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 23:06:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 23:06:52 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 23:06:52 Generating JWE encryption key
	2025/12/10 23:06:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 23:06:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 23:06:53 Initializing JWE encryption key from synchronized object
	2025/12/10 23:06:53 Creating in-cluster Sidecar client
	2025/12/10 23:06:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:53 Serving insecurely on HTTP port: 9090
	2025/12/10 23:07:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 23:06:52 Starting overwatch
	
	
	==> storage-provisioner [03c79ea3984846b82fbe72f24ccfbb62924cc13acea530b9157dcdb4bd3de3cd] <==
	I1210 23:07:16.319376       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 23:07:16.319464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 23:07:16.321588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:19.778243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:24.039209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:27.638222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:30.692426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:33.714867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:33.720200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:33.720366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 23:07:33.720509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-443884_2d7d06be-2d4e-4aab-bdd8-34933ef40b8a!
	I1210 23:07:33.720512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df00d01c-2573-4975-bde4-5f3658985b9c", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-443884_2d7d06be-2d4e-4aab-bdd8-34933ef40b8a became leader
	W1210 23:07:33.722457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:33.726079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 23:07:33.821605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-443884_2d7d06be-2d4e-4aab-bdd8-34933ef40b8a!
	W1210 23:07:35.729438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:35.734484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:37.741806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:37.746169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:39.749693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:39.754474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:41.757827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:41.880387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:43.885684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:07:43.896306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c3e1a844834388d4103eddef3963bc9e96d501cd66480d94d9fe59129e0f7910] <==
	I1210 23:06:45.527403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 23:07:15.531369       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884: exit status 2 (409.916528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.55s)

                                                
                                    

Test pass (352/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.45
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 2.95
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.34
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.41
30 TestBinaryMirror 0.83
31 TestOffline 54.47
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 121.29
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/serial/GCPAuth/FakeCredentials 9.42
57 TestAddons/StoppedEnableDisable 16.73
58 TestCertOptions 26.84
59 TestCertExpiration 218.68
61 TestForceSystemdFlag 25.65
62 TestForceSystemdEnv 33.95
67 TestErrorSpam/setup 18.74
68 TestErrorSpam/start 0.64
69 TestErrorSpam/status 0.93
70 TestErrorSpam/pause 6.57
71 TestErrorSpam/unpause 5.08
72 TestErrorSpam/stop 12.64
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.62
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.17
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
84 TestFunctional/serial/CacheCmd/cache/add_local 0.92
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 67.79
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.29
95 TestFunctional/serial/LogsFileCmd 1.29
96 TestFunctional/serial/InvalidService 6.86
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 7.21
100 TestFunctional/parallel/DryRun 0.44
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 0.99
106 TestFunctional/parallel/ServiceCmdConnect 10.73
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 43.97
110 TestFunctional/parallel/SSHCmd 0.59
111 TestFunctional/parallel/CpCmd 1.9
112 TestFunctional/parallel/MySQL 20.21
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 1.88
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
122 TestFunctional/parallel/License 0.29
123 TestFunctional/parallel/Version/short 0.08
124 TestFunctional/parallel/Version/components 0.55
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.59
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 2.89
130 TestFunctional/parallel/ImageCommands/Setup 0.5
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
135 TestFunctional/parallel/ServiceCmd/DeployApp 7.18
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 7.2
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.99
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.03
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
147 TestFunctional/parallel/ServiceCmd/List 0.61
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
150 TestFunctional/parallel/ServiceCmd/Format 0.45
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 12.63
153 TestFunctional/parallel/ServiceCmd/URL 0.44
154 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
159 TestFunctional/parallel/ProfileCmd/profile_list 0.44
160 TestFunctional/parallel/MountCmd/any-port 7.8
161 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
162 TestFunctional/parallel/MountCmd/specific-port 1.7
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 39.95
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.25
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.63
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.86
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.56
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 39.39
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.22
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.27
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.87
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 7.19
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.94
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 7.51
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.2
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 22.36
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.53
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.83
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.33
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.74
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.09
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.64
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.09
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.62
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 2.18
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5.17
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.17
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.28
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.16
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.85
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 0.96
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.6
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 8.21
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.52
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.51
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.37
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.43
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.38
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.38
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.93
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.18
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.18
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.18
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.15
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 109.33
266 TestMultiControlPlane/serial/DeployApp 5.81
267 TestMultiControlPlane/serial/PingHostFromPods 1.04
268 TestMultiControlPlane/serial/AddWorkerNode 26.69
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
271 TestMultiControlPlane/serial/CopyFile 16.91
272 TestMultiControlPlane/serial/StopSecondaryNode 13.33
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.17
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 119.09
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.59
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
279 TestMultiControlPlane/serial/StopCluster 49.56
280 TestMultiControlPlane/serial/RestartCluster 53.57
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
282 TestMultiControlPlane/serial/AddSecondaryNode 64.76
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
288 TestJSONOutput/start/Command 71.21
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.19
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 30.22
314 TestKicCustomNetwork/use_default_bridge_network 21.82
315 TestKicExistingNetwork 25.34
316 TestKicCustomSubnet 26.29
317 TestKicStaticIP 28.16
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 49.73
322 TestMountStart/serial/StartWithMountFirst 7.69
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 7.66
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.69
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.25
329 TestMountStart/serial/RestartStopped 7.21
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 64.43
334 TestMultiNode/serial/DeployApp2Nodes 3.44
335 TestMultiNode/serial/PingHostFrom2Pods 0.71
336 TestMultiNode/serial/AddNode 53.26
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.65
339 TestMultiNode/serial/CopyFile 9.7
340 TestMultiNode/serial/StopNode 2.24
341 TestMultiNode/serial/StartAfterStop 7.15
342 TestMultiNode/serial/RestartKeepsNodes 83.65
343 TestMultiNode/serial/DeleteNode 5.23
344 TestMultiNode/serial/StopMultiNode 28.58
345 TestMultiNode/serial/RestartMultiNode 47.92
346 TestMultiNode/serial/ValidateNameConflict 21.75
351 TestPreload 98.88
353 TestScheduledStopUnix 95.17
356 TestInsufficientStorage 11.8
357 TestRunningBinaryUpgrade 44.19
359 TestKubernetesUpgrade 299.46
360 TestMissingContainerUpgrade 93.71
362 TestPause/serial/Start 49.39
363 TestPause/serial/SecondStartNoReconfiguration 11.04
365 TestStoppedBinaryUpgrade/Setup 0.73
366 TestStoppedBinaryUpgrade/Upgrade 284.47
375 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
376 TestNoKubernetes/serial/StartWithK8s 24.09
384 TestNetworkPlugins/group/false 3.73
385 TestNoKubernetes/serial/StartWithStopK8s 23.12
389 TestNoKubernetes/serial/Start 7.24
390 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
391 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
392 TestNoKubernetes/serial/ProfileList 3.32
394 TestStartStop/group/old-k8s-version/serial/FirstStart 49.69
395 TestNoKubernetes/serial/Stop 3.35
396 TestNoKubernetes/serial/StartNoArgs 6.69
397 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
399 TestStartStop/group/no-preload/serial/FirstStart 46.76
400 TestStartStop/group/old-k8s-version/serial/DeployApp 7.32
402 TestStartStop/group/old-k8s-version/serial/Stop 15.95
403 TestStartStop/group/no-preload/serial/DeployApp 9.23
405 TestStartStop/group/no-preload/serial/Stop 16.35
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
407 TestStartStop/group/old-k8s-version/serial/SecondStart 50.12
408 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
409 TestStartStop/group/no-preload/serial/SecondStart 43.82
411 TestStartStop/group/embed-certs/serial/FirstStart 45.61
412 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
414 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.72
415 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
417 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
418 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
420 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
421 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
423 TestStartStop/group/embed-certs/serial/DeployApp 7.31
425 TestStartStop/group/newest-cni/serial/FirstStart 26.14
426 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
428 TestNetworkPlugins/group/auto/Start 41.11
429 TestStartStop/group/embed-certs/serial/Stop 16.48
431 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.23
432 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
433 TestStartStop/group/embed-certs/serial/SecondStart 51.09
434 TestStartStop/group/newest-cni/serial/DeployApp 0
436 TestStartStop/group/newest-cni/serial/Stop 2.44
437 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
438 TestStartStop/group/newest-cni/serial/SecondStart 12.09
439 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
440 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.08
441 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
442 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
443 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
445 TestNetworkPlugins/group/auto/KubeletFlags 0.36
446 TestNetworkPlugins/group/auto/NetCatPod 9.25
447 TestNetworkPlugins/group/kindnet/Start 37.74
448 TestNetworkPlugins/group/auto/DNS 0.11
449 TestNetworkPlugins/group/auto/Localhost 0.09
450 TestNetworkPlugins/group/auto/HairPin 0.09
451 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
452 TestNetworkPlugins/group/calico/Start 50.64
453 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
454 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
455 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
457 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
458 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
459 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
461 TestNetworkPlugins/group/custom-flannel/Start 51.38
462 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
463 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
464 TestNetworkPlugins/group/enable-default-cni/Start 43.06
465 TestNetworkPlugins/group/kindnet/DNS 0.15
466 TestNetworkPlugins/group/kindnet/Localhost 0.1
467 TestNetworkPlugins/group/kindnet/HairPin 0.11
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/flannel/Start 45.78
470 TestNetworkPlugins/group/calico/KubeletFlags 0.35
471 TestNetworkPlugins/group/calico/NetCatPod 10.22
472 TestNetworkPlugins/group/calico/DNS 0.12
473 TestNetworkPlugins/group/calico/Localhost 0.09
474 TestNetworkPlugins/group/calico/HairPin 0.09
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
477 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
478 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.21
479 TestNetworkPlugins/group/custom-flannel/DNS 0.14
480 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
481 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
482 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
483 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
484 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
485 TestNetworkPlugins/group/bridge/Start 58.21
486 TestNetworkPlugins/group/flannel/ControllerPod 6.01
487 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
488 TestNetworkPlugins/group/flannel/NetCatPod 8.2
489 TestNetworkPlugins/group/flannel/DNS 0.11
490 TestNetworkPlugins/group/flannel/Localhost 0.09
491 TestNetworkPlugins/group/flannel/HairPin 0.09
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
493 TestNetworkPlugins/group/bridge/NetCatPod 9.17
494 TestNetworkPlugins/group/bridge/DNS 0.11
495 TestNetworkPlugins/group/bridge/Localhost 0.09
496 TestNetworkPlugins/group/bridge/HairPin 0.08
x
+
TestDownloadOnly/v1.28.0/json-events (4.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-751103 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-751103 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.445627324s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 22:25:42.921907    8660 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 22:25:42.922005    8660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-751103
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-751103: exit status 85 (70.60338ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-751103 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-751103 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:25:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:25:38.531132    8672 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:25:38.531240    8672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:38.531248    8672 out.go:374] Setting ErrFile to fd 2...
	I1210 22:25:38.531252    8672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:38.531467    8672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	W1210 22:25:38.531592    8672 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22061-5100/.minikube/config/config.json: open /home/jenkins/minikube-integration/22061-5100/.minikube/config/config.json: no such file or directory
	I1210 22:25:38.532065    8672 out.go:368] Setting JSON to true
	I1210 22:25:38.532931    8672 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":481,"bootTime":1765405058,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:25:38.532987    8672 start.go:143] virtualization: kvm guest
	I1210 22:25:38.537687    8672 out.go:99] [download-only-751103] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1210 22:25:38.537856    8672 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 22:25:38.537901    8672 notify.go:221] Checking for updates...
	I1210 22:25:38.539488    8672 out.go:171] MINIKUBE_LOCATION=22061
	I1210 22:25:38.541138    8672 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:25:38.542458    8672 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:25:38.543880    8672 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:25:38.545364    8672 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 22:25:38.547796    8672 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 22:25:38.548036    8672 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:25:38.572810    8672 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:25:38.572895    8672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:38.804763    8672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-10 22:25:38.79462053 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:38.804869    8672 docker.go:319] overlay module found
	I1210 22:25:38.806515    8672 out.go:99] Using the docker driver based on user configuration
	I1210 22:25:38.806535    8672 start.go:309] selected driver: docker
	I1210 22:25:38.806540    8672 start.go:927] validating driver "docker" against <nil>
	I1210 22:25:38.806608    8672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:38.862805    8672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-10 22:25:38.854204413 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:38.862978    8672 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:25:38.863501    8672 start_flags.go:425] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 22:25:38.863713    8672 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 22:25:38.865810    8672 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-751103 host does not exist
	  To start a cluster, run: "minikube start -p download-only-751103"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-751103
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (2.95s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-488286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-488286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.952007691s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.95s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 22:25:46.322913    8660 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 22:25:46.322947    8660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)
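
The preload-exists check above only confirms that the cached tarball is present on disk. A minimal way to repeat that check by hand, assuming the MINIKUBE_HOME layout shown in the log (adjust the path for other environments):

  $ ls -lh /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/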

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-488286
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-488286: exit status 85 (71.9462ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-751103 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-751103 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-751103                                                                                                                                                   │ download-only-751103 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ -o=json --download-only -p download-only-488286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-488286 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:25:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:25:43.422956    9027 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:25:43.423244    9027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:43.423255    9027 out.go:374] Setting ErrFile to fd 2...
	I1210 22:25:43.423259    9027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:43.423525    9027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:25:43.424027    9027 out.go:368] Setting JSON to true
	I1210 22:25:43.424897    9027 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":485,"bootTime":1765405058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:25:43.424950    9027 start.go:143] virtualization: kvm guest
	I1210 22:25:43.427063    9027 out.go:99] [download-only-488286] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:25:43.427252    9027 notify.go:221] Checking for updates...
	I1210 22:25:43.428566    9027 out.go:171] MINIKUBE_LOCATION=22061
	I1210 22:25:43.429938    9027 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:25:43.431210    9027 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:25:43.432424    9027 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:25:43.433574    9027 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 22:25:43.435935    9027 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 22:25:43.436189    9027 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:25:43.458989    9027 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:25:43.459095    9027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:43.514554    9027 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-10 22:25:43.505321837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:43.514669    9027 docker.go:319] overlay module found
	I1210 22:25:43.516319    9027 out.go:99] Using the docker driver based on user configuration
	I1210 22:25:43.516351    9027 start.go:309] selected driver: docker
	I1210 22:25:43.516359    9027 start.go:927] validating driver "docker" against <nil>
	I1210 22:25:43.516432    9027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:43.572327    9027 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-10 22:25:43.562158548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:43.572525    9027 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:25:43.573240    9027 start_flags.go:425] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 22:25:43.573443    9027 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 22:25:43.575276    9027 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-488286 host does not exist
	  To start a cluster, run: "minikube start -p download-only-488286"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-488286
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.34s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-033871 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-033871 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.338876272s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.34s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 22:25:50.108315    8660 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1210 22:25:50.108361    8660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-033871
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-033871: exit status 85 (75.914477ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-751103 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-751103 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-751103                                                                                                                                                          │ download-only-751103 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ -o=json --download-only -p download-only-488286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-488286 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ delete  │ -p download-only-488286                                                                                                                                                          │ download-only-488286 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │ 10 Dec 25 22:25 UTC │
	│ start   │ -o=json --download-only -p download-only-033871 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-033871 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:25:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:25:46.821395    9391 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:25:46.822042    9391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:46.822051    9391 out.go:374] Setting ErrFile to fd 2...
	I1210 22:25:46.822056    9391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:46.822223    9391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:25:46.822708    9391 out.go:368] Setting JSON to true
	I1210 22:25:46.823475    9391 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":489,"bootTime":1765405058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:25:46.823527    9391 start.go:143] virtualization: kvm guest
	I1210 22:25:46.825502    9391 out.go:99] [download-only-033871] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:25:46.825687    9391 notify.go:221] Checking for updates...
	I1210 22:25:46.827205    9391 out.go:171] MINIKUBE_LOCATION=22061
	I1210 22:25:46.828984    9391 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:25:46.830285    9391 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:25:46.831563    9391 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:25:46.832834    9391 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 22:25:46.835282    9391 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 22:25:46.835557    9391 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:25:46.860271    9391 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:25:46.860352    9391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:46.919000    9391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-10 22:25:46.910008706 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:46.919115    9391 docker.go:319] overlay module found
	I1210 22:25:46.920549    9391 out.go:99] Using the docker driver based on user configuration
	I1210 22:25:46.920573    9391 start.go:309] selected driver: docker
	I1210 22:25:46.920578    9391 start.go:927] validating driver "docker" against <nil>
	I1210 22:25:46.920667    9391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:25:46.975354    9391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-10 22:25:46.966510499 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:25:46.975506    9391 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:25:46.976005    9391 start_flags.go:425] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 22:25:46.976153    9391 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 22:25:46.978172    9391 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-033871 host does not exist
	  To start a cluster, run: "minikube start -p download-only-033871"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-033871
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-950186 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-950186" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-950186
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1210 22:25:51.397093    8660 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-479778 --alsologtostderr --binary-mirror http://127.0.0.1:46291 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-479778" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-479778
--- PASS: TestBinaryMirror (0.83s)
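
The binary-mirror run above serves kubectl from a local mirror, and the logged URL pairs the binary with its published .sha256 file. A rough sketch of verifying that pairing by hand, assuming curl and sha256sum are available (dl.k8s.io publishes the bare hash, so the filename is appended before checking):

  $ curl -fsSLO https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl
  $ echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check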

                                                
                                    
TestOffline (54.47s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-615390 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-615390 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (49.058397867s)
helpers_test.go:176: Cleaning up "offline-crio-615390" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-615390
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-615390: (5.413717904s)
--- PASS: TestOffline (54.47s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-713277
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-713277: exit status 85 (61.759944ms)

                                                
                                                
-- stdout --
	* Profile "addons-713277" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-713277"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-713277
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-713277: exit status 85 (63.168105ms)

                                                
                                                
-- stdout --
	* Profile "addons-713277" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-713277"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (121.29s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-713277 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-713277 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m1.294684673s)
--- PASS: TestAddons/Setup (121.29s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-713277 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-713277 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.42s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-713277 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-713277 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [11388322-7f1e-4c85-84e7-f8e3566769a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [11388322-7f1e-4c85-84e7-f8e3566769a7] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003122594s
addons_test.go:696: (dbg) Run:  kubectl --context addons-713277 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-713277 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-713277 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.42s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-713277
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-713277: (16.433373533s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-713277
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-713277
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-713277
--- PASS: TestAddons/StoppedEnableDisable (16.73s)

                                                
                                    
TestCertOptions (26.84s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-062370 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-062370 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.712570627s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-062370 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-062370 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-062370 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-062370" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-062370
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-062370: (2.471427277s)
--- PASS: TestCertOptions (26.84s)
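
The assertions in this test reduce to reading the generated apiserver certificate. A hedged sketch of the same inspection done manually, assuming the cert-options-062370 profile is still running (the grep pattern is only illustrative):

  $ out/minikube-linux-amd64 ssh -p cert-options-062370 -- "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'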

                                                
                                    
TestCertExpiration (218.68s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-669067 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-669067 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.879892073s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-669067 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1210 23:02:54.240978    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-669067 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.181572231s)
helpers_test.go:176: Cleaning up "cert-expiration-669067" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-669067
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-669067: (2.613022066s)
--- PASS: TestCertExpiration (218.68s)
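
The two start invocations above differ only in --cert-expiration (3m, then 8760h), and the effect can be read straight off the apiserver certificate. A minimal sketch, assuming the cert-expiration-669067 profile still exists:

  $ out/minikube-linux-amd64 ssh -p cert-expiration-669067 -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"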

                                                
                                    
TestForceSystemdFlag (25.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-725815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-725815 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.917716357s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-725815 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-725815" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-725815
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-725815: (2.432085629s)
--- PASS: TestForceSystemdFlag (25.65s)
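
The ssh step above dumps /etc/crio/crio.conf.d/02-crio.conf to confirm that --force-systemd reached the CRI-O cgroup settings. For orientation, the relevant fragment of that drop-in generally looks like the following (illustrative values, not copied from this run):

  [crio.runtime]
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"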

                                                
                                    
TestForceSystemdEnv (33.95s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-634162 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-634162 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.454027672s)
helpers_test.go:176: Cleaning up "force-systemd-env-634162" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-634162
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-634162: (2.499697953s)
--- PASS: TestForceSystemdEnv (33.95s)

                                                
                                    
TestErrorSpam/setup (18.74s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-071656 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-071656 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-071656 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-071656 --driver=docker  --container-runtime=crio: (18.743487183s)
--- PASS: TestErrorSpam/setup (18.74s)

                                                
                                    
TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (6.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause: exit status 80 (2.340601472s)

                                                
                                                
-- stdout --
	* Pausing node nospam-071656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause: exit status 80 (1.954081196s)

                                                
                                                
-- stdout --
	* Pausing node nospam-071656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:31:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause: exit status 80 (2.272021836s)

                                                
                                                
-- stdout --
	* Pausing node nospam-071656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.57s)
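
Every pause attempt in this test fails at the same step: minikube shells into the node and asks runc for the running containers, and runc cannot find /run/runc. A quick way to reproduce that step in isolation, assuming the nospam-071656 profile is still up (the crictl call is only a cross-check against what CRI-O itself reports):

  $ out/minikube-linux-amd64 ssh -p nospam-071656 -- "sudo runc list -f json"
  $ out/minikube-linux-amd64 ssh -p nospam-071656 -- "sudo crictl ps"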

                                                
                                    
TestErrorSpam/unpause (5.08s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause: exit status 80 (1.796731073s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-071656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:31:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause: exit status 80 (1.743475434s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-071656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause: exit status 80 (1.535201996s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-071656 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T22:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.08s)

                                                
                                    
TestErrorSpam/stop (12.64s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 stop: (12.42413255s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-071656 --log_dir /tmp/nospam-071656 stop
--- PASS: TestErrorSpam/stop (12.64s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/test/nested/copy/8660/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.62s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345678 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-345678 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.616941299s)
--- PASS: TestFunctional/serial/StartWithProxy (39.62s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.17s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1210 22:32:28.736121    8660 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345678 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-345678 --alsologtostderr -v=8: (6.170660078s)
functional_test.go:678: soft start took 6.171393919s for "functional-345678" cluster.
I1210 22:32:34.907211    8660 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.17s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-345678 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-345678 /tmp/TestFunctionalserialCacheCmdcacheadd_local1801503756/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cache add minikube-local-cache-test:functional-345678
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cache delete minikube-local-cache-test:functional-345678
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-345678
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.272344ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 kubectl -- --context functional-345678 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-345678 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (67.79s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345678 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 22:32:54.250014    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.256466    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.267862    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.289229    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.330660    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.412095    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.573620    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:54.895340    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:55.537420    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:56.818996    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:32:59.381862    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:04.503395    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:14.745673    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:35.227757    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-345678 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.787012894s)
functional_test.go:776: restart took 1m7.787136133s for "functional-345678" cluster.
I1210 22:33:48.580778    8660 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (67.79s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-345678 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-345678 logs: (1.292721433s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 logs --file /tmp/TestFunctionalserialLogsFileCmd3505855536/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-345678 logs --file /tmp/TestFunctionalserialLogsFileCmd3505855536/001/logs.txt: (1.285625219s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctional/serial/InvalidService (6.86s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-345678 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-345678
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-345678: exit status 115 (348.034919ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31829 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-345678 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-345678 delete -f testdata/invalidsvc.yaml: (3.320620153s)
--- PASS: TestFunctional/serial/InvalidService (6.86s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 config get cpus: exit status 14 (83.45246ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 config get cpus: exit status 14 (79.241834ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345678 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-345678 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 47147: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.21s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.847891ms)

                                                
                                                
-- stdout --
	* [functional-345678] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:34:25.002886   47292 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:34:25.002978   47292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:34:25.002986   47292 out.go:374] Setting ErrFile to fd 2...
	I1210 22:34:25.002990   47292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:34:25.003224   47292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:34:25.004095   47292 out.go:368] Setting JSON to false
	I1210 22:34:25.005243   47292 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1007,"bootTime":1765405058,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:34:25.005300   47292 start.go:143] virtualization: kvm guest
	I1210 22:34:25.007511   47292 out.go:179] * [functional-345678] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:34:25.008783   47292 notify.go:221] Checking for updates...
	I1210 22:34:25.008835   47292 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:34:25.010431   47292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:34:25.012081   47292 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:34:25.013408   47292 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:34:25.014755   47292 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:34:25.018884   47292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:34:25.020956   47292 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:34:25.021681   47292 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:34:25.047587   47292 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:34:25.047744   47292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:34:25.117465   47292 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 22:34:25.105192578 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:34:25.117603   47292 docker.go:319] overlay module found
	I1210 22:34:25.119678   47292 out.go:179] * Using the docker driver based on existing profile
	I1210 22:34:25.120958   47292 start.go:309] selected driver: docker
	I1210 22:34:25.120976   47292 start.go:927] validating driver "docker" against &{Name:functional-345678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-345678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:34:25.121138   47292 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:34:25.123035   47292 out.go:203] 
	W1210 22:34:25.124279   47292 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 22:34:25.125437   47292 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345678 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-345678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-345678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.023267ms)

                                                
                                                
-- stdout --
	* [functional-345678] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:34:25.454037   47500 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:34:25.454176   47500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:34:25.454187   47500 out.go:374] Setting ErrFile to fd 2...
	I1210 22:34:25.454193   47500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:34:25.454567   47500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:34:25.455108   47500 out.go:368] Setting JSON to false
	I1210 22:34:25.456224   47500 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1007,"bootTime":1765405058,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:34:25.456279   47500 start.go:143] virtualization: kvm guest
	I1210 22:34:25.458188   47500 out.go:179] * [functional-345678] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 22:34:25.459770   47500 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:34:25.459777   47500 notify.go:221] Checking for updates...
	I1210 22:34:25.461198   47500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:34:25.462614   47500 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:34:25.463926   47500 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:34:25.465137   47500 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:34:25.468908   47500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:34:25.470827   47500 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:34:25.471507   47500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:34:25.500378   47500 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:34:25.500456   47500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:34:25.567993   47500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 22:34:25.556093774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:34:25.568160   47500 docker.go:319] overlay module found
	I1210 22:34:25.570078   47500 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 22:34:25.571280   47500 start.go:309] selected driver: docker
	I1210 22:34:25.571292   47500 start.go:927] validating driver "docker" against &{Name:functional-345678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-345678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:34:25.571385   47500 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:34:25.573502   47500 out.go:203] 
	W1210 22:34:25.574766   47500 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 22:34:25.575970   47500 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-345678 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-345678 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-zd9p9" [3fe99075-3f53-4d7b-940a-ec586916bba5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-zd9p9" [3fe99075-3f53-4d7b-940a-ec586916bba5] Running
E1210 22:34:16.189671    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1210 22:34:18.292586    8660 retry.go:31] will retry after 2.627284995s: Temporary Error: Get "http://10.99.210.142": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004189154s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32725
functional_test.go:1680: http://192.168.49.2:32725: success! body:
Request served by hello-node-connect-7d85dfc575-zd9p9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32725
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.73s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a594130e-466f-4f74-bebb-ff779f545b9c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005188s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-345678 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-345678 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-345678 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345678 apply -f testdata/storage-provisioner/pod.yaml
I1210 22:34:07.108673    8660 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2664d50c-af3c-4f37-89fe-447133a3a37f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. not found)
helpers_test.go:353: "sp-pod" [2664d50c-af3c-4f37-89fe-447133a3a37f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [2664d50c-af3c-4f37-89fe-447133a3a37f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.003377477s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-345678 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-345678 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-345678 delete -f testdata/storage-provisioner/pod.yaml: (1.127477909s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-345678 apply -f testdata/storage-provisioner/pod.yaml
I1210 22:34:37.463026    8660 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ee47a649-b697-4c1b-89c7-28b9fa317c75] Pending
helpers_test.go:353: "sp-pod" [ee47a649-b697-4c1b-89c7-28b9fa317c75] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003106311s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-345678 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.97s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh -n functional-345678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cp functional-345678:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2723707139/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh -n functional-345678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh -n functional-345678 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (20.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-345678 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-kxq5f" [4b287d4e-47bb-46f0-98c4-586760a78649] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-kxq5f" [4b287d4e-47bb-46f0-98c4-586760a78649] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.00363019s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;": exit status 1 (136.607793ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:34:19.031287    8660 retry.go:31] will retry after 1.003238092s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;": exit status 1 (103.979364ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:34:20.139190    8660 retry.go:31] will retry after 1.541496527s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;": exit status 1 (105.668139ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:34:21.786759    8660 retry.go:31] will retry after 3.043640848s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-345678 exec mysql-6bcdcbc558-kxq5f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.21s)
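
The failed probes above are expected noise: the mysql container is still initializing, so the harness simply retries after an increasing delay (1.0s, 1.5s, 3.0s in this run) until "show databases;" succeeds. A minimal sketch of that retry-with-backoff pattern follows, assuming a caller-supplied probe; minikube's own retry.go helper is not reproduced here.

// retry_backoff_sketch.go -- illustrative sketch of the retry pattern seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs probe until it succeeds, sleeping a little longer after each
// failure, up to maxAttempts.
func retry(maxAttempts int, initial time.Duration, probe func() error) error {
	delay := initial
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); retrying after %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay, roughly like the 1.0s -> 1.5s -> 3.0s seen in the log
	}
	return err
}

func main() {
	err := retry(10, time.Second, func() error {
		// Same probe as the log: run "show databases;" inside the mysql pod.
		return exec.Command("kubectl", "--context", "functional-345678",
			"exec", "mysql-6bcdcbc558-kxq5f", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("mysql answered")
}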

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8660/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /etc/test/nested/copy/8660/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8660.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /etc/ssl/certs/8660.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8660.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /usr/share/ca-certificates/8660.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/86602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /etc/ssl/certs/86602.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/86602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /usr/share/ca-certificates/86602.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-345678 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh "sudo systemctl is-active docker": exit status 1 (321.148537ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh "sudo systemctl is-active containerd": exit status 1 (315.28064ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
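
"systemctl is-active" prints the unit state and exits non-zero when the unit is not active, which is why both probes above report "inactive" on stdout together with "Process exited with status 3": that combination is the expected result for container runtimes that should be disabled on a crio node. A small sketch of reading both pieces of information, assuming systemctl is available wherever it runs.

// runtime_inactive_sketch.go -- illustrative sketch, assumes systemctl on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		state := strings.TrimSpace(string(out)) // "active", "inactive", ...
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode() // non-zero (3 in the log) for an inactive unit
		}
		fmt.Printf("%s: state=%q exit=%d\n", unit, state, code)
	}
}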

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345678 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-345678
localhost/kicbase/echo-server:functional-345678
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345678 image ls --format short --alsologtostderr:
I1210 22:34:26.828878   48026 out.go:360] Setting OutFile to fd 1 ...
I1210 22:34:26.829164   48026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:26.829173   48026 out.go:374] Setting ErrFile to fd 2...
I1210 22:34:26.829178   48026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:26.829462   48026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:34:26.830263   48026 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:26.830413   48026 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:26.831018   48026 cli_runner.go:164] Run: docker container inspect functional-345678 --format={{.State.Status}}
I1210 22:34:26.856295   48026 ssh_runner.go:195] Run: systemctl --version
I1210 22:34:26.856440   48026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345678
I1210 22:34:26.877782   48026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-345678/id_rsa Username:docker}
I1210 22:34:26.985091   48026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.59s)
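
The stderr above shows what "image ls" does under crio: inspect the container, open an SSH session to the mapped port, and run "sudo crictl images --output json" inside the node; the short format then prints only the repo tags. Here is a sketch of that last step run directly against crictl; the JSON field names (images, repoTags) are an assumption about crictl's output shape, not something taken from this log.

// crictl_tags_sketch.go -- illustrative sketch; crictl's JSON schema is assumed.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	// Print one tag per line, like the short format above.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}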

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345678 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-345678  │ 53c26b8cde3a3 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-345678  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345678 image ls --format table --alsologtostderr:
I1210 22:34:29.345129   49039 out.go:360] Setting OutFile to fd 1 ...
I1210 22:34:29.345229   49039 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:29.345238   49039 out.go:374] Setting ErrFile to fd 2...
I1210 22:34:29.345242   49039 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:29.345446   49039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:34:29.346003   49039 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:29.346109   49039 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:29.347221   49039 cli_runner.go:164] Run: docker container inspect functional-345678 --format={{.State.Status}}
I1210 22:34:29.366844   49039 ssh_runner.go:195] Run: systemctl --version
I1210 22:34:29.366911   49039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345678
I1210 22:34:29.384603   49039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-345678/id_rsa Username:docker}
I1210 22:34:29.482441   49039 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345678 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io
/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb95
37b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-345678"],"size":"4945246"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busy
box:1.28.4-glibc"],"size":"4631262"},{"id":"53c26b8cde3a3340c65dd3ff0e260a6f11e471e83610295baab52a4c6a256cba","repoDigests":["localhost/minikube-local-cache-test@sha256:69a6023e93cfa9b61697448367211a35646bf978511eb243f4ceee6921c04bbb"],"repoTags":["localhost/minikube-local-cache-test:functional-345678"],"size":"3330"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storag
e-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb760
5d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c
7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345678 image ls --format json --alsologtostderr:
I1210 22:34:29.117153   48961 out.go:360] Setting OutFile to fd 1 ...
I1210 22:34:29.117467   48961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:29.117477   48961 out.go:374] Setting ErrFile to fd 2...
I1210 22:34:29.117482   48961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:29.117761   48961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:34:29.118425   48961 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:29.118517   48961 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:29.118999   48961 cli_runner.go:164] Run: docker container inspect functional-345678 --format={{.State.Status}}
I1210 22:34:29.137746   48961 ssh_runner.go:195] Run: systemctl --version
I1210 22:34:29.137805   48961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345678
I1210 22:34:29.157067   48961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-345678/id_rsa Username:docker}
I1210 22:34:29.255163   48961 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
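
The JSON emitted by "image ls --format json" above is an array of image records, each carrying id, repoDigests, repoTags and size. A small sketch that runs the same command and prints one line per tagged image; the field names are read straight off the stdout above, while the trimming and filtering are illustrative choices.

// image_ls_json_sketch.go -- illustrative sketch of consuming the output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	raw, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-345678",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) == 0 {
			continue // untagged entries (e.g. the dashboard image above) have repoTags: []
		}
		id := img.ID
		if len(id) > 13 {
			id = id[:13] // short ID, as in the table format
		}
		fmt.Printf("%s  %s  %s bytes\n", id, strings.Join(img.RepoTags, ","), img.Size)
	}
}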

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345678 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-345678
size: "4945246"
- id: 53c26b8cde3a3340c65dd3ff0e260a6f11e471e83610295baab52a4c6a256cba
repoDigests:
- localhost/minikube-local-cache-test@sha256:69a6023e93cfa9b61697448367211a35646bf978511eb243f4ceee6921c04bbb
repoTags:
- localhost/minikube-local-cache-test:functional-345678
size: "3330"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345678 image ls --format yaml --alsologtostderr:
I1210 22:34:27.414703   48092 out.go:360] Setting OutFile to fd 1 ...
I1210 22:34:27.414965   48092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:27.414975   48092 out.go:374] Setting ErrFile to fd 2...
I1210 22:34:27.414979   48092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:27.415212   48092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:34:27.415904   48092 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:27.416005   48092 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:27.416421   48092 cli_runner.go:164] Run: docker container inspect functional-345678 --format={{.State.Status}}
I1210 22:34:27.436664   48092 ssh_runner.go:195] Run: systemctl --version
I1210 22:34:27.436729   48092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345678
I1210 22:34:27.456599   48092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-345678/id_rsa Username:docker}
I1210 22:34:27.552202   48092 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh pgrep buildkitd: exit status 1 (281.130715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image build -t localhost/my-image:functional-345678 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-345678 image build -t localhost/my-image:functional-345678 testdata/build --alsologtostderr: (2.372660185s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-345678 image build -t localhost/my-image:functional-345678 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 76e64c7e889
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-345678
--> 58b25d6894c
Successfully tagged localhost/my-image:functional-345678
58b25d6894cbe9dd172b7c729b9bd767b35f65ca7e512ce2156e3349d83ab6e6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-345678 image build -t localhost/my-image:functional-345678 testdata/build --alsologtostderr:
I1210 22:34:27.946422   48300 out.go:360] Setting OutFile to fd 1 ...
I1210 22:34:27.946570   48300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:27.946577   48300 out.go:374] Setting ErrFile to fd 2...
I1210 22:34:27.946582   48300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:34:27.946881   48300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:34:27.947745   48300 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:27.948678   48300 config.go:182] Loaded profile config "functional-345678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:34:27.949389   48300 cli_runner.go:164] Run: docker container inspect functional-345678 --format={{.State.Status}}
I1210 22:34:27.971493   48300 ssh_runner.go:195] Run: systemctl --version
I1210 22:34:27.971544   48300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345678
I1210 22:34:27.994406   48300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-345678/id_rsa Username:docker}
I1210 22:34:28.098711   48300 build_images.go:162] Building image from path: /tmp/build.172930235.tar
I1210 22:34:28.098793   48300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 22:34:28.108820   48300 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.172930235.tar
I1210 22:34:28.113396   48300 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.172930235.tar: stat -c "%s %y" /var/lib/minikube/build/build.172930235.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.172930235.tar': No such file or directory
I1210 22:34:28.113432   48300 ssh_runner.go:362] scp /tmp/build.172930235.tar --> /var/lib/minikube/build/build.172930235.tar (3072 bytes)
I1210 22:34:28.137920   48300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.172930235
I1210 22:34:28.148110   48300 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.172930235 -xf /var/lib/minikube/build/build.172930235.tar
I1210 22:34:28.159009   48300 crio.go:315] Building image: /var/lib/minikube/build/build.172930235
I1210 22:34:28.159076   48300 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-345678 /var/lib/minikube/build/build.172930235 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 22:34:30.216366   48300 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-345678 /var/lib/minikube/build/build.172930235 --cgroup-manager=cgroupfs: (2.057270364s)
I1210 22:34:30.216427   48300 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.172930235
I1210 22:34:30.224696   48300 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.172930235.tar
I1210 22:34:30.233154   48300 build_images.go:218] Built localhost/my-image:functional-345678 from /tmp/build.172930235.tar
I1210 22:34:30.233276   48300 build_images.go:134] succeeded building to: functional-345678
I1210 22:34:30.233290   48300 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.89s)
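
The build stderr traces the crio path end to end: pack the local testdata/build context into a tar, copy it to /var/lib/minikube/build on the node, unpack it, and run "sudo podman build" there. Below is a minimal sketch of the first step only, packing a directory of regular files into a tar; the layout and error handling are simplified assumptions, not minikube's build_images.go.

// build_context_tar_sketch.go -- illustrative sketch of packing a build context.
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir writes every regular file under src into a tar archive at dst,
// using paths relative to src (so a Dockerfile sits at the archive root).
func tarDir(src, dst string) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil || !info.Mode().IsRegular() {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
		panic(err)
	}
}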

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-345678
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image load --daemon kicbase/echo-server:functional-345678 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-345678 image load --daemon kicbase/echo-server:functional-345678 --alsologtostderr: (1.057450358s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-345678 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-345678 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-fnn5r" [71798cdb-22ea-4441-9a09-26b2b88b78d0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-fnn5r" [71798cdb-22ea-4441-9a09-26b2b88b78d0] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004209066s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.18s)
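
Most checks in this report reduce to "wait N minutes for pods matching a label in a namespace to be Running". A small sketch of that wait using kubectl with a jsonpath over pod phases; the label, namespace and timeout mirror the hello-node deployment above, while the helper itself is an illustrative stand-in for the helpers_test.go logic.

// wait_for_label_sketch.go -- illustrative sketch of the label-based readiness wait.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether at least one pod matches the selector and every
// matching pod is in the Running phase.
func allRunning(namespace, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", "functional-345678",
		"-n", namespace, "get", "pods", "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := allRunning("default", "app=hello-node")
		if err == nil && ok {
			fmt.Println("app=hello-node healthy")
			return
		}
		time.Sleep(3 * time.Second)
	}
	panic("timed out waiting for app=hello-node")
}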

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image load --daemon kicbase/echo-server:functional-345678 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 43250: os: process already finished
helpers_test.go:526: unable to kill pid 43035: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-345678 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [58468989-c09e-49c2-b91a-b3ce146d9846] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [58468989-c09e-49c2-b91a-b3ce146d9846] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 7.004246998s
I1210 22:34:08.204575    8660 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-345678
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image load --daemon kicbase/echo-server:functional-345678 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image save kicbase/echo-server:functional-345678 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-345678 image save kicbase/echo-server:functional-345678 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.032634084s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image rm kicbase/echo-server:functional-345678 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-345678
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 image save --daemon kicbase/echo-server:functional-345678 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-345678
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
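Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a save/remove/restore round trip. A minimal sketch of the same sequence, reusing the tarball path from the run above:

	# save the in-cluster image to a tarball on the host
	out/minikube-linux-amd64 -p functional-345678 image save kicbase/echo-server:functional-345678 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	# remove it from the cluster, then restore it from the tarball
	out/minikube-linux-amd64 -p functional-345678 image rm kicbase/echo-server:functional-345678
	out/minikube-linux-amd64 -p functional-345678 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	# copy the image back into the host docker daemon; it lands under the localhost/ prefix, as inspected above
	out/minikube-linux-amd64 -p functional-345678 image save --daemon kicbase/echo-server:functional-345678
	docker image inspect localhost/kicbase/echo-server:functional-345678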

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 service list -o json
functional_test.go:1504: Took "597.341412ms" to run "out/minikube-linux-amd64 -p functional-345678 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32665
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-345678 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.210.142 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32665
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
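The ServiceCmd subtests map onto the everyday commands for discovering a service endpoint on the node. A minimal sketch, assuming a Service named hello-node in the default namespace:

	# list services known to the profile, in table and JSON form
	out/minikube-linux-amd64 -p functional-345678 service list
	out/minikube-linux-amd64 -p functional-345678 service list -o json
	# print the https:// and http:// NodePort URLs for one service
	out/minikube-linux-amd64 -p functional-345678 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-345678 service hello-node --url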

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
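The tunnel subtests above show the LoadBalancer workflow: with minikube tunnel running, nginx-svc is assigned an ingress IP that is reachable directly from the host (http://10.99.210.142 in this run). A minimal sketch of the same flow, assuming the tunnel is run as a background shell job rather than under the test harness:

	# expose LoadBalancer services on the host while the tunnel runs
	out/minikube-linux-amd64 -p functional-345678 tunnel --alsologtostderr &
	# read the assigned ingress IP and probe it from the host
	curl "http://$(kubectl --context functional-345678 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
	# stop the background tunnel when done
	kill %1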

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "371.886331ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.417334ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdany-port2078862835/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765406061029052253" to /tmp/TestFunctionalparallelMountCmdany-port2078862835/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765406061029052253" to /tmp/TestFunctionalparallelMountCmdany-port2078862835/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765406061029052253" to /tmp/TestFunctionalparallelMountCmdany-port2078862835/001/test-1765406061029052253
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.398656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:34:21.324767    8660 retry.go:31] will retry after 400.64409ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 22:34 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 22:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 22:34 test-1765406061029052253
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh cat /mount-9p/test-1765406061029052253
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-345678 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [9892b72f-2f4a-4363-9396-bb41e4a20fee] Pending
helpers_test.go:353: "busybox-mount" [9892b72f-2f4a-4363-9396-bb41e4a20fee] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [9892b72f-2f4a-4363-9396-bb41e4a20fee] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [9892b72f-2f4a-4363-9396-bb41e4a20fee] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00424294s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-345678 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdany-port2078862835/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.80s)
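The any-port mount test doubles as a recipe for exposing a host directory inside the node over 9p. A minimal sketch, assuming a hypothetical host directory /tmp/demo-mount and running the mount helper as a background shell job:

	# start the 9p mount in the background (it serves until stopped)
	out/minikube-linux-amd64 mount -p functional-345678 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
	# verify the mount from inside the node and list its contents
	out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-345678 ssh -- ls -la /mount-9p
	# clean up: unmount inside the node, then stop the background job
	out/minikube-linux-amd64 -p functional-345678 ssh "sudo umount -f /mount-9p"
	kill %1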

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "349.23823ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.783301ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
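The ProfileCmd timing subtests compare the full and light profile listings; in this run the light variants (-l, --light) returned in roughly 70ms versus roughly 350ms for the full listings, presumably because they skip per-profile status checks. A minimal sketch:

	# full listings, which include per-profile status
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -o json
	# light listings, which read only the stored profile config
	out/minikube-linux-amd64 profile list -l
	out/minikube-linux-amd64 profile list -o json --light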

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdspecific-port3152552599/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p"
2025/12/10 22:34:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.270913ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:34:29.107846    8660 retry.go:31] will retry after 346.619778ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdspecific-port3152552599/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh "sudo umount -f /mount-9p": exit status 1 (280.191666ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-345678 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdspecific-port3152552599/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3742291955/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3742291955/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3742291955/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T" /mount1: exit status 1 (331.579287ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:34:30.857413    8660 retry.go:31] will retry after 573.909335ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-345678 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-345678 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3742291955/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3742291955/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-345678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3742291955/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
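VerifyCleanup leans on mount --kill, which is also the quickest manual way to clear any lingering mount helper processes for a profile. A minimal sketch:

	# terminate any leftover "minikube mount" helper processes for the profile
	out/minikube-linux-amd64 mount -p functional-345678 --kill=true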

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-345678
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-345678
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-345678
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22061-5100/.minikube/files/etc/test/nested/copy/8660/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (39.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-174200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (39.944641424s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (39.95s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1210 22:35:27.639483    8660 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174200 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-174200 --alsologtostderr -v=8: (6.251840777s)
functional_test.go:678: soft start took 6.252184433s for "functional-174200" cluster.
I1210 22:35:33.891675    8660 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.25s)
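SoftStart re-runs start against the profile created by StartWithProxy: the first start provisions the node (about 40s here), while the repeated start reuses the existing profile and finishes in about 6s. A minimal sketch of the two invocations:

	# initial start: provisions the docker node and waits for all components (~40s in this run)
	out/minikube-linux-amd64 start -p functional-174200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
	# repeated start on the same profile: a soft start against the existing node (~6s in this run)
	out/minikube-linux-amd64 start -p functional-174200 --alsologtostderr -v=8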

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-174200 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2562505677/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cache add minikube-local-cache-test:functional-174200
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cache delete minikube-local-cache-test:functional-174200
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-174200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.86s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl rmi registry.k8s.io/pause:latest
E1210 22:35:38.111038    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.404249ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)
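The CacheCmd subtests walk minikube's image cache through its full lifecycle. A minimal sketch of the same sequence against the functional-174200 profile, using registry.k8s.io/pause:3.1 throughout:

	# cache an image locally and push it into the node
	out/minikube-linux-amd64 -p functional-174200 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 cache list
	# simulate image loss inside the node, then restore it from the cache
	out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl rmi registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-174200 cache reload
	out/minikube-linux-amd64 -p functional-174200 ssh sudo crictl inspecti registry.k8s.io/pause:3.1
	# drop the entry from the cache again
	out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1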

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 kubectl -- --context functional-174200 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-174200 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (39.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-174200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.392205507s)
functional_test.go:776: restart took 39.392304611s for "functional-174200" cluster.
I1210 22:36:19.208503    8660 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (39.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-174200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 logs: (1.219845005s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1644734921/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1644734921/001/logs.txt: (1.264371491s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.27s)
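LogsCmd and LogsFileCmd cover the two output modes of minikube logs. A minimal sketch, with a hypothetical output path:

	# print cluster logs to stdout
	out/minikube-linux-amd64 -p functional-174200 logs
	# write them to a file instead (path is arbitrary)
	out/minikube-linux-amd64 -p functional-174200 logs --file /tmp/functional-174200-logs.txt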

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-174200 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-174200
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-174200: exit status 115 (349.08783ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30245 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-174200 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.87s)
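InvalidService exercises the failure path of minikube service: when no running pod backs the Service, the command still prints the NodePort table but exits with status 115 (SVC_UNREACHABLE). A minimal sketch using the suite's testdata manifest:

	kubectl --context functional-174200 apply -f testdata/invalidsvc.yaml
	# prints the NodePort table but exits 115 because no running pod backs the service
	out/minikube-linux-amd64 service invalid-svc -p functional-174200
	kubectl --context functional-174200 delete -f testdata/invalidsvc.yaml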

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 config get cpus: exit status 14 (72.945441ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 config get cpus: exit status 14 (83.151058ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
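The ConfigCmd sequence documents the per-profile config store's behaviour: get on an unset key exits with status 14 and an error on stderr. A minimal sketch:

	# set, read back and clear a per-profile config key
	out/minikube-linux-amd64 -p functional-174200 config set cpus 2
	out/minikube-linux-amd64 -p functional-174200 config get cpus
	out/minikube-linux-amd64 -p functional-174200 config unset cpus
	# reading an unset key prints an error and exits with status 14
	out/minikube-linux-amd64 -p functional-174200 config get cpus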

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-174200 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-174200 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 65658: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174200 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-174200 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (170.91154ms)

                                                
                                                
-- stdout --
	* [functional-174200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:36:36.303181   63788 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:36:36.303414   63788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:36:36.303422   63788 out.go:374] Setting ErrFile to fd 2...
	I1210 22:36:36.303426   63788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:36:36.303637   63788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:36:36.304054   63788 out.go:368] Setting JSON to false
	I1210 22:36:36.305104   63788 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1138,"bootTime":1765405058,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:36:36.305156   63788 start.go:143] virtualization: kvm guest
	I1210 22:36:36.307260   63788 out.go:179] * [functional-174200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:36:36.308757   63788 notify.go:221] Checking for updates...
	I1210 22:36:36.308767   63788 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:36:36.310207   63788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:36:36.311388   63788 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:36:36.312610   63788 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:36:36.313896   63788 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:36:36.315177   63788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:36:36.317511   63788 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 22:36:36.318129   63788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:36:36.344992   63788 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:36:36.345119   63788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:36:36.408022   63788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 22:36:36.397826178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:36:36.408135   63788 docker.go:319] overlay module found
	I1210 22:36:36.409603   63788 out.go:179] * Using the docker driver based on existing profile
	I1210 22:36:36.410747   63788 start.go:309] selected driver: docker
	I1210 22:36:36.410763   63788 start.go:927] validating driver "docker" against &{Name:functional-174200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-174200 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:36:36.410881   63788 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:36:36.412842   63788 out.go:203] 
	W1210 22:36:36.413937   63788 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 22:36:36.415073   63788 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174200 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
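DryRun confirms that --dry-run still performs driver and resource validation: requesting 250MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, usable minimum 1800MB), while the same command without the memory override validates cleanly and changes nothing. A minimal sketch:

	# fails validation: 250MB is below the 1800MB usable minimum (exit status 23)
	out/minikube-linux-amd64 start -p functional-174200 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
	# validates the existing profile without creating or changing anything
	out/minikube-linux-amd64 start -p functional-174200 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0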

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174200 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-174200 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (166.444444ms)

                                                
                                                
-- stdout --
	* [functional-174200] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:36:36.740000   64225 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:36:36.740121   64225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:36:36.740131   64225 out.go:374] Setting ErrFile to fd 2...
	I1210 22:36:36.740135   64225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:36:36.740444   64225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:36:36.740930   64225 out.go:368] Setting JSON to false
	I1210 22:36:36.741926   64225 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1139,"bootTime":1765405058,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:36:36.741989   64225 start.go:143] virtualization: kvm guest
	I1210 22:36:36.743766   64225 out.go:179] * [functional-174200] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 22:36:36.745124   64225 notify.go:221] Checking for updates...
	I1210 22:36:36.745148   64225 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:36:36.746608   64225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:36:36.748098   64225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 22:36:36.749421   64225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 22:36:36.750611   64225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:36:36.751841   64225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:36:36.753513   64225 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 22:36:36.754040   64225 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:36:36.779427   64225 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 22:36:36.779509   64225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:36:36.836724   64225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 22:36:36.826484426 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:36:36.836822   64225 docker.go:319] overlay module found
	I1210 22:36:36.838633   64225 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 22:36:36.839828   64225 start.go:309] selected driver: docker
	I1210 22:36:36.839841   64225 start.go:927] validating driver "docker" against &{Name:functional-174200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-174200 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:36:36.839939   64225 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:36:36.841624   64225 out.go:203] 
	W1210 22:36:36.842861   64225 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 22:36:36.844032   64225 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.94s)
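
For reference, a minimal sketch of the three status invocations this subtest exercises, using the profile name from this run (output varies by environment; the template keys on the left are arbitrary labels, the fields come from minikube's status struct):

	# human-readable status
	out/minikube-linux-amd64 -p functional-174200 status
	# custom Go-template format
	out/minikube-linux-amd64 -p functional-174200 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# machine-readable JSON for scripting
	out/minikube-linux-amd64 -p functional-174200 status -o json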

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-174200 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-174200 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-52mq9" [ec49905e-b9eb-46db-984b-cc1e5cd182b7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-52mq9" [ec49905e-b9eb-46db-984b-cc1e5cd182b7] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004226784s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31437
functional_test.go:1680: http://192.168.49.2:31437: success! body:
Request served by hello-node-connect-9f67c86d4-52mq9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31437
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.51s)
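
Condensed, the ServiceCmdConnect flow above is: create a deployment, expose it as a NodePort service, resolve its URL through minikube, and issue an HTTP request. A minimal shell sketch of that flow (the test performs the final request from Go; curl stands in for it here, and the readiness wait is omitted):

	kubectl --context functional-174200 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-174200 expose deployment hello-node-connect --type=NodePort --port=8080
	# once the pod is Running, ask minikube for the NodePort URL and hit it
	URL=$(out/minikube-linux-amd64 -p functional-174200 service hello-node-connect --url)
	curl -s "$URL"   # echo-server replies with the request it received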

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.20s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (22.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [01241b42-5d06-4d3d-a319-4ceac3d82be3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003157406s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-174200 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-174200 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-174200 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-174200 apply -f testdata/storage-provisioner/pod.yaml
I1210 22:36:33.133182    8660 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [362961bd-13b9-47dd-a9e7-8ceb35d8f253] Pending
helpers_test.go:353: "sp-pod" [362961bd-13b9-47dd-a9e7-8ceb35d8f253] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [362961bd-13b9-47dd-a9e7-8ceb35d8f253] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003099815s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-174200 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-174200 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-174200 apply -f testdata/storage-provisioner/pod.yaml
I1210 22:36:42.868553    8660 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [07f2f065-fdad-419b-82c6-2190fd6e509e] Pending
helpers_test.go:353: "sp-pod" [07f2f065-fdad-419b-82c6-2190fd6e509e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.041568707s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-174200 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (22.36s)
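
The persistence check above boils down to: bind a PVC, mount it in a pod, write through the mount, recreate the pod, and confirm the data survived. A minimal sketch using the same testdata manifests and pod name as this run (readiness waits omitted):

	kubectl --context functional-174200 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-174200 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-174200 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod; the file must survive because it lives on the claim, not in the container filesystem
	kubectl --context functional-174200 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-174200 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-174200 exec sp-pod -- ls /tmp/mount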

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh -n functional-174200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cp functional-174200:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp608023478/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh -n functional-174200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh -n functional-174200 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.83s)
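
A condensed sketch of the copy directions exercised above: host to node, node back to host, and host to a node path that does not yet exist (the host-side destination below is illustrative; the test uses a temporary directory):

	out/minikube-linux-amd64 -p functional-174200 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-174200 cp functional-174200:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-amd64 -p functional-174200 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	# verify from inside the node
	out/minikube-linux-amd64 -p functional-174200 ssh -n functional-174200 "sudo cat /home/docker/cp-test.txt"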

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-174200 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-m6qpq" [96eda56a-5ac2-4718-b2f4-4bc965fc1f9d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-m6qpq" [96eda56a-5ac2-4718-b2f4-4bc965fc1f9d] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 18.003974493s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;": exit status 1 (89.975904ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:36:59.333311    8660 retry.go:31] will retry after 947.229896ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;": exit status 1 (112.975182ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:37:00.394370    8660 retry.go:31] will retry after 1.028016844s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;": exit status 1 (91.0368ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:37:01.513779    8660 retry.go:31] will retry after 2.780389758s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.33s)
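
The retries above are expected: the pod reports Running before mysqld finishes initializing, so the first queries fail with "Access denied" or a socket error, and the test retries with backoff until the query succeeds. A simple polling loop that achieves the same by hand (pod name taken from this run):

	# keep polling until mysqld accepts the connection and the credentials
	until kubectl --context functional-174200 exec mysql-7d7b65bc95-m6qpq -- \
	    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	  sleep 2
	done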

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8660/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /etc/test/nested/copy/8660/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8660.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /etc/ssl/certs/8660.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8660.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /usr/share/ca-certificates/8660.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/86602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /etc/ssl/certs/86602.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/86602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /usr/share/ca-certificates/86602.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.74s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-174200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh "sudo systemctl is-active docker": exit status 1 (323.700247ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh "sudo systemctl is-active containerd": exit status 1 (318.674582ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.64s)
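
The non-zero exits above are the expected outcome: with crio selected as the container runtime, the other runtimes should be stopped, and `systemctl is-active` exits non-zero for an inactive unit (status 3, surfaced through the ssh wrapper here) while printing "inactive". A quick check by hand:

	out/minikube-linux-amd64 -p functional-174200 ssh "sudo systemctl is-active docker"       # prints "inactive", non-zero exit
	out/minikube-linux-amd64 -p functional-174200 ssh "sudo systemctl is-active containerd"   # prints "inactive", non-zero exit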

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.62s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (2.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 image ls --format table --alsologtostderr: (2.175629555s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174200 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-174200  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-174200  │ 53c26b8cde3a3 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174200 image ls --format table --alsologtostderr:
I1210 22:36:49.998975   68294 out.go:360] Setting OutFile to fd 1 ...
I1210 22:36:49.999307   68294 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:49.999322   68294 out.go:374] Setting ErrFile to fd 2...
I1210 22:36:49.999329   68294 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:49.999663   68294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:36:50.000478   68294 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:50.000638   68294 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:50.001280   68294 cli_runner.go:164] Run: docker container inspect functional-174200 --format={{.State.Status}}
I1210 22:36:50.023016   68294 ssh_runner.go:195] Run: systemctl --version
I1210 22:36:50.023075   68294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-174200
I1210 22:36:50.045288   68294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-174200/id_rsa Username:docker}
I1210 22:36:50.149520   68294 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 22:36:52.092574   68294 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.943008631s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (2.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh pgrep buildkitd: exit status 1 (335.478835ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image build -t localhost/my-image:functional-174200 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 image build -t localhost/my-image:functional-174200 testdata/build --alsologtostderr: (4.609524986s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174200 image build -t localhost/my-image:functional-174200 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0da89280e1f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-174200
--> a7aa1995904
Successfully tagged localhost/my-image:functional-174200
a7aa1995904a6236bcbf983579b51b55e7c1fc5a9c754fa0c5829f4ce683c6aa
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174200 image build -t localhost/my-image:functional-174200 testdata/build --alsologtostderr:
I1210 22:36:49.507823   68227 out.go:360] Setting OutFile to fd 1 ...
I1210 22:36:49.508106   68227 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:49.508119   68227 out.go:374] Setting ErrFile to fd 2...
I1210 22:36:49.508126   68227 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:36:49.508341   68227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
I1210 22:36:49.509037   68227 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:49.509809   68227 config.go:182] Loaded profile config "functional-174200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:36:49.510488   68227 cli_runner.go:164] Run: docker container inspect functional-174200 --format={{.State.Status}}
I1210 22:36:49.532250   68227 ssh_runner.go:195] Run: systemctl --version
I1210 22:36:49.532319   68227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-174200
I1210 22:36:49.554919   68227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/functional-174200/id_rsa Username:docker}
I1210 22:36:49.659519   68227 build_images.go:162] Building image from path: /tmp/build.633276946.tar
I1210 22:36:49.659593   68227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 22:36:49.670252   68227 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.633276946.tar
I1210 22:36:49.674766   68227 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.633276946.tar: stat -c "%s %y" /var/lib/minikube/build/build.633276946.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.633276946.tar': No such file or directory
I1210 22:36:49.674811   68227 ssh_runner.go:362] scp /tmp/build.633276946.tar --> /var/lib/minikube/build/build.633276946.tar (3072 bytes)
I1210 22:36:49.697982   68227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.633276946
I1210 22:36:49.707886   68227 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.633276946 -xf /var/lib/minikube/build/build.633276946.tar
I1210 22:36:49.718420   68227 crio.go:315] Building image: /var/lib/minikube/build/build.633276946
I1210 22:36:49.718521   68227 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-174200 /var/lib/minikube/build/build.633276946 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 22:36:54.018532   68227 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-174200 /var/lib/minikube/build/build.633276946 --cgroup-manager=cgroupfs: (4.299978132s)
I1210 22:36:54.018608   68227 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.633276946
I1210 22:36:54.026912   68227 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.633276946.tar
I1210 22:36:54.034657   68227 build_images.go:218] Built localhost/my-image:functional-174200 from /tmp/build.633276946.tar
I1210 22:36:54.034692   68227 build_images.go:134] succeeded building to: functional-174200
I1210 22:36:54.034698   68227 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.17s)
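
As the stderr above shows, the test first checks for buildkitd, then `minikube image build` copies the build context into the node as a tar and, with crio as the runtime, delegates the actual build to podman inside the node. A minimal sketch of the same invocation:

	# build a local context into an image visible to the cluster's runtime
	out/minikube-linux-amd64 -p functional-174200 image build -t localhost/my-image:functional-174200 testdata/build --alsologtostderr
	# confirm the image is now listed by the runtime
	out/minikube-linux-amd64 -p functional-174200 image ls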

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-174200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image load --daemon kicbase/echo-server:functional-174200 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-174200 image load --daemon kicbase/echo-server:functional-174200 --alsologtostderr: (1.048557846s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-174200 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-174200 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-lfgpb" [4d8dfe96-0a77-45ed-adae-4c31ef4f96ed] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-lfgpb" [4d8dfe96-0a77-45ed-adae-4c31ef4f96ed] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003734378s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image load --daemon kicbase/echo-server:functional-174200 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-174200
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image load --daemon kicbase/echo-server:functional-174200 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (0.96s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image save kicbase/echo-server:functional-174200 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image rm kicbase/echo-server:functional-174200 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-174200
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 image save --daemon kicbase/echo-server:functional-174200 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-174200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)
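
Taken together, the image subtests above round-trip an image between the host Docker daemon, a tar archive, and the cluster's runtime. A condensed sketch of that round trip (the tar path below is illustrative; this run used a path under the Jenkins workspace):

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-174200
	out/minikube-linux-amd64 -p functional-174200 image load --daemon kicbase/echo-server:functional-174200
	out/minikube-linux-amd64 -p functional-174200 image save kicbase/echo-server:functional-174200 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-174200 image rm kicbase/echo-server:functional-174200
	out/minikube-linux-amd64 -p functional-174200 image load /tmp/echo-server-save.tar
	# push the image from the cluster back into the host daemon and confirm it is there
	out/minikube-linux-amd64 -p functional-174200 image save --daemon kicbase/echo-server:functional-174200
	docker image inspect localhost/kicbase/echo-server:functional-174200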

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-174200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-174200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-174200 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 62213: os: process already finished
helpers_test.go:520: unable to terminate pid 62024: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-174200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-174200 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-174200 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a069a3b0-963d-40c7-af4a-e93e60ecea65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a069a3b0-963d-40c7-af4a-e93e60ecea65] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00399918s
I1210 22:36:40.884928    8660 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 service list -o json
functional_test.go:1504: Took "507.251866ms" to run "out/minikube-linux-amd64 -p functional-174200 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31836
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "363.491375ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.416859ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "352.653858ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.440202ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31836
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1115871465/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765406196416897084" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1115871465/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765406196416897084" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1115871465/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765406196416897084" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1115871465/001/test-1765406196416897084
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.176959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:36:36.734390    8660 retry.go:31] will retry after 406.45726ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 22:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 22:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 22:36 test-1765406196416897084
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh cat /mount-9p/test-1765406196416897084
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-174200 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [e37cb2a4-45ba-4b6b-aa58-38ed847c16df] Pending
helpers_test.go:353: "busybox-mount" [e37cb2a4-45ba-4b6b-aa58-38ed847c16df] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [e37cb2a4-45ba-4b6b-aa58-38ed847c16df] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [e37cb2a4-45ba-4b6b-aa58-38ed847c16df] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003436106s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-174200 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1115871465/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.93s)
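For reference, the 9p mount flow exercised by this test can be reproduced by hand against the same profile; this is a minimal sketch built only from the commands the log already shows (the host directory is an arbitrary placeholder, and the mount command is left running in the background):

	out/minikube-linux-amd64 mount -p functional-174200 /tmp/example-mount:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-174200 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-174200 ssh "sudo umount -f /mount-9p"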

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-174200 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.160.177 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-174200 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1673176412/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.186262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:36:43.685749    8660 retry.go:31] will retry after 622.340428ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1673176412/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh "sudo umount -f /mount-9p": exit status 1 (378.91511ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-174200 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1673176412/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3133183177/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3133183177/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3133183177/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T" /mount1
2025/12/10 22:36:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T" /mount1: exit status 1 (467.629704ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:36:45.962371    8660 retry.go:31] will retry after 535.063911ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174200 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-174200 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3133183177/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3133183177/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174200 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3133183177/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-174200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-174200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-174200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (109.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 22:37:54.242087    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:21.953310    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m48.62335618s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (109.33s)
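Note: the --ha flag used above brings up a multi-control-plane cluster (the later status output in this run shows three control-plane nodes plus, after node add, one worker). A minimal manual sketch of the same start-and-verify sequence, reusing the exact commands from this run:

	out/minikube-linux-amd64 -p ha-758057 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5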

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- rollout status deployment/busybox
E1210 22:38:59.229333    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.235768    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.247127    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.268504    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.309909    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.391722    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.553352    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:38:59.875000    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:39:00.516432    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 kubectl -- rollout status deployment/busybox: (3.9077938s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-5mtr8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-bj4b2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-h9zc5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-5mtr8 -- nslookup kubernetes.default
E1210 22:39:01.797802    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-bj4b2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-h9zc5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-5mtr8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-bj4b2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-h9zc5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.81s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-5mtr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-5mtr8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-bj4b2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-bj4b2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-h9zc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 kubectl -- exec busybox-7b57f96db7-h9zc5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (26.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node add --alsologtostderr -v 5
E1210 22:39:04.360032    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:39:09.481820    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:39:19.723470    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 node add --alsologtostderr -v 5: (25.793929766s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.69s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-758057 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp testdata/cp-test.txt ha-758057:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile139857127/001/cp-test_ha-758057.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057:/home/docker/cp-test.txt ha-758057-m02:/home/docker/cp-test_ha-758057_ha-758057-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test_ha-758057_ha-758057-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057:/home/docker/cp-test.txt ha-758057-m03:/home/docker/cp-test_ha-758057_ha-758057-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test_ha-758057_ha-758057-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057:/home/docker/cp-test.txt ha-758057-m04:/home/docker/cp-test_ha-758057_ha-758057-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test_ha-758057_ha-758057-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp testdata/cp-test.txt ha-758057-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile139857127/001/cp-test_ha-758057-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m02:/home/docker/cp-test.txt ha-758057:/home/docker/cp-test_ha-758057-m02_ha-758057.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test_ha-758057-m02_ha-758057.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m02:/home/docker/cp-test.txt ha-758057-m03:/home/docker/cp-test_ha-758057-m02_ha-758057-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test_ha-758057-m02_ha-758057-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m02:/home/docker/cp-test.txt ha-758057-m04:/home/docker/cp-test_ha-758057-m02_ha-758057-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test_ha-758057-m02_ha-758057-m04.txt"
E1210 22:39:40.205339    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp testdata/cp-test.txt ha-758057-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile139857127/001/cp-test_ha-758057-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m03:/home/docker/cp-test.txt ha-758057:/home/docker/cp-test_ha-758057-m03_ha-758057.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test_ha-758057-m03_ha-758057.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m03:/home/docker/cp-test.txt ha-758057-m02:/home/docker/cp-test_ha-758057-m03_ha-758057-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test_ha-758057-m03_ha-758057-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m03:/home/docker/cp-test.txt ha-758057-m04:/home/docker/cp-test_ha-758057-m03_ha-758057-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test_ha-758057-m03_ha-758057-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp testdata/cp-test.txt ha-758057-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile139857127/001/cp-test_ha-758057-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m04:/home/docker/cp-test.txt ha-758057:/home/docker/cp-test_ha-758057-m04_ha-758057.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test_ha-758057-m04_ha-758057.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m04:/home/docker/cp-test.txt ha-758057-m02:/home/docker/cp-test_ha-758057-m04_ha-758057-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test_ha-758057-m04_ha-758057-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 cp ha-758057-m04:/home/docker/cp-test.txt ha-758057-m03:/home/docker/cp-test_ha-758057-m04_ha-758057-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m03 "sudo cat /home/docker/cp-test_ha-758057-m04_ha-758057-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.91s)
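The CopyFile steps above amount to round-tripping testdata/cp-test.txt between the host and every node and reading it back over SSH; a minimal sketch of a single host-to-node-to-node hop, using the same commands the test issues:

	out/minikube-linux-amd64 -p ha-758057 cp testdata/cp-test.txt ha-758057:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p ha-758057 cp ha-758057:/home/docker/cp-test.txt ha-758057-m02:/home/docker/cp-test_ha-758057_ha-758057-m02.txt
	out/minikube-linux-amd64 -p ha-758057 ssh -n ha-758057-m02 "sudo cat /home/docker/cp-test_ha-758057_ha-758057-m02.txt"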

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 node stop m02 --alsologtostderr -v 5: (12.632280772s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5: exit status 7 (696.559566ms)

                                                
                                                
-- stdout --
	ha-758057
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-758057-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-758057-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-758057-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:40:00.963949   88693 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:40:00.964254   88693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:40:00.964267   88693 out.go:374] Setting ErrFile to fd 2...
	I1210 22:40:00.964273   88693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:40:00.964563   88693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:40:00.964824   88693 out.go:368] Setting JSON to false
	I1210 22:40:00.964862   88693 mustload.go:66] Loading cluster: ha-758057
	I1210 22:40:00.964972   88693 notify.go:221] Checking for updates...
	I1210 22:40:00.965367   88693 config.go:182] Loaded profile config "ha-758057": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:40:00.965386   88693 status.go:174] checking status of ha-758057 ...
	I1210 22:40:00.966054   88693 cli_runner.go:164] Run: docker container inspect ha-758057 --format={{.State.Status}}
	I1210 22:40:00.985654   88693 status.go:371] ha-758057 host status = "Running" (err=<nil>)
	I1210 22:40:00.985690   88693 host.go:66] Checking if "ha-758057" exists ...
	I1210 22:40:00.985949   88693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-758057
	I1210 22:40:01.005357   88693 host.go:66] Checking if "ha-758057" exists ...
	I1210 22:40:01.005664   88693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:40:01.005721   88693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-758057
	I1210 22:40:01.024146   88693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/ha-758057/id_rsa Username:docker}
	I1210 22:40:01.117063   88693 ssh_runner.go:195] Run: systemctl --version
	I1210 22:40:01.123239   88693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:40:01.136395   88693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:40:01.195261   88693 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 22:40:01.184338571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:40:01.195816   88693 kubeconfig.go:125] found "ha-758057" server: "https://192.168.49.254:8443"
	I1210 22:40:01.195845   88693 api_server.go:166] Checking apiserver status ...
	I1210 22:40:01.195884   88693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:40:01.209187   88693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1210 22:40:01.217667   88693 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 22:40:01.217718   88693 ssh_runner.go:195] Run: ls
	I1210 22:40:01.221375   88693 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 22:40:01.225444   88693 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 22:40:01.225465   88693 status.go:463] ha-758057 apiserver status = Running (err=<nil>)
	I1210 22:40:01.225474   88693 status.go:176] ha-758057 status: &{Name:ha-758057 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:40:01.225488   88693 status.go:174] checking status of ha-758057-m02 ...
	I1210 22:40:01.225754   88693 cli_runner.go:164] Run: docker container inspect ha-758057-m02 --format={{.State.Status}}
	I1210 22:40:01.244551   88693 status.go:371] ha-758057-m02 host status = "Stopped" (err=<nil>)
	I1210 22:40:01.244571   88693 status.go:384] host is not running, skipping remaining checks
	I1210 22:40:01.244576   88693 status.go:176] ha-758057-m02 status: &{Name:ha-758057-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:40:01.244595   88693 status.go:174] checking status of ha-758057-m03 ...
	I1210 22:40:01.244850   88693 cli_runner.go:164] Run: docker container inspect ha-758057-m03 --format={{.State.Status}}
	I1210 22:40:01.262498   88693 status.go:371] ha-758057-m03 host status = "Running" (err=<nil>)
	I1210 22:40:01.262523   88693 host.go:66] Checking if "ha-758057-m03" exists ...
	I1210 22:40:01.262834   88693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-758057-m03
	I1210 22:40:01.281118   88693 host.go:66] Checking if "ha-758057-m03" exists ...
	I1210 22:40:01.281360   88693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:40:01.281401   88693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-758057-m03
	I1210 22:40:01.299371   88693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/ha-758057-m03/id_rsa Username:docker}
	I1210 22:40:01.393588   88693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:40:01.406380   88693 kubeconfig.go:125] found "ha-758057" server: "https://192.168.49.254:8443"
	I1210 22:40:01.406405   88693 api_server.go:166] Checking apiserver status ...
	I1210 22:40:01.406435   88693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:40:01.418131   88693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W1210 22:40:01.427183   88693 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 22:40:01.427246   88693 ssh_runner.go:195] Run: ls
	I1210 22:40:01.431246   88693 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 22:40:01.435202   88693 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 22:40:01.435223   88693 status.go:463] ha-758057-m03 apiserver status = Running (err=<nil>)
	I1210 22:40:01.435230   88693 status.go:176] ha-758057-m03 status: &{Name:ha-758057-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:40:01.435243   88693 status.go:174] checking status of ha-758057-m04 ...
	I1210 22:40:01.435483   88693 cli_runner.go:164] Run: docker container inspect ha-758057-m04 --format={{.State.Status}}
	I1210 22:40:01.453734   88693 status.go:371] ha-758057-m04 host status = "Running" (err=<nil>)
	I1210 22:40:01.453754   88693 host.go:66] Checking if "ha-758057-m04" exists ...
	I1210 22:40:01.454033   88693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-758057-m04
	I1210 22:40:01.471569   88693 host.go:66] Checking if "ha-758057-m04" exists ...
	I1210 22:40:01.471843   88693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:40:01.471886   88693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-758057-m04
	I1210 22:40:01.490098   88693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/ha-758057-m04/id_rsa Username:docker}
	I1210 22:40:01.582954   88693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:40:01.598483   88693 status.go:176] ha-758057-m04 status: &{Name:ha-758057-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 node start m02 --alsologtostderr -v 5: (13.215878833s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.17s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 stop --alsologtostderr -v 5
E1210 22:40:21.167526    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 stop --alsologtostderr -v 5: (49.217068111s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 start --wait true --alsologtostderr -v 5
E1210 22:41:26.256286    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.262887    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.274331    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.295866    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.337434    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.418948    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.580756    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:26.902790    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:27.544254    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:28.825811    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:31.388103    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:36.509412    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:43.089223    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:41:46.751578    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:07.233288    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 start --wait true --alsologtostderr -v 5: (1m9.735663474s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.09s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 node delete m03 --alsologtostderr -v 5: (9.758332863s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (49.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 stop --alsologtostderr -v 5
E1210 22:42:48.195788    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:54.243883    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 stop --alsologtostderr -v 5: (49.437069712s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5: exit status 7 (119.171291ms)

                                                
                                                
-- stdout --
	ha-758057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-758057-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-758057-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:43:17.245788  103077 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:43:17.245887  103077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:43:17.245892  103077 out.go:374] Setting ErrFile to fd 2...
	I1210 22:43:17.245895  103077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:43:17.246118  103077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:43:17.246279  103077 out.go:368] Setting JSON to false
	I1210 22:43:17.246314  103077 mustload.go:66] Loading cluster: ha-758057
	I1210 22:43:17.246469  103077 notify.go:221] Checking for updates...
	I1210 22:43:17.246679  103077 config.go:182] Loaded profile config "ha-758057": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:43:17.246694  103077 status.go:174] checking status of ha-758057 ...
	I1210 22:43:17.247130  103077 cli_runner.go:164] Run: docker container inspect ha-758057 --format={{.State.Status}}
	I1210 22:43:17.267272  103077 status.go:371] ha-758057 host status = "Stopped" (err=<nil>)
	I1210 22:43:17.267308  103077 status.go:384] host is not running, skipping remaining checks
	I1210 22:43:17.267314  103077 status.go:176] ha-758057 status: &{Name:ha-758057 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:43:17.267340  103077 status.go:174] checking status of ha-758057-m02 ...
	I1210 22:43:17.267606  103077 cli_runner.go:164] Run: docker container inspect ha-758057-m02 --format={{.State.Status}}
	I1210 22:43:17.285793  103077 status.go:371] ha-758057-m02 host status = "Stopped" (err=<nil>)
	I1210 22:43:17.285845  103077 status.go:384] host is not running, skipping remaining checks
	I1210 22:43:17.285858  103077 status.go:176] ha-758057-m02 status: &{Name:ha-758057-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:43:17.285888  103077 status.go:174] checking status of ha-758057-m04 ...
	I1210 22:43:17.286268  103077 cli_runner.go:164] Run: docker container inspect ha-758057-m04 --format={{.State.Status}}
	I1210 22:43:17.304380  103077 status.go:371] ha-758057-m04 host status = "Stopped" (err=<nil>)
	I1210 22:43:17.304427  103077 status.go:384] host is not running, skipping remaining checks
	I1210 22:43:17.304441  103077 status.go:176] ha-758057-m04 status: &{Name:ha-758057-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (49.56s)
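For reference, the status output above is derived from a per-container Docker inspect; a minimal sketch of cross-checking the stopped state by hand (assuming the ha-758057 containers still exist):

	docker container inspect ha-758057 --format={{.State.Status}}
	docker container inspect ha-758057-m02 --format={{.State.Status}}
	docker container inspect ha-758057-m04 --format={{.State.Status}}

Each command is expected to print exited for a stopped profile, which minikube status reports as "Stopped".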

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (53.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 22:43:59.230130    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.76401572s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
E1210 22:44:10.117341    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (64.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 node add --control-plane --alsologtostderr -v 5
E1210 22:44:26.930567    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-758057 node add --control-plane --alsologtostderr -v 5: (1m3.882113901s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-758057 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (64.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
x
+
TestJSONOutput/start/Command (71.21s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-283371 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1210 22:46:26.256636    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-283371 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m11.209652195s)
--- PASS: TestJSONOutput/start/Command (71.21s)
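The Audit and parallel subtests that follow consume the CloudEvents-style JSON stream emitted by this start. A minimal sketch of extracting the step sequence by hand (assuming jq is available; only a fresh profile shows the full sequence):

	out/minikube-linux-amd64 start -p json-output-283371 --output=json --user=testUser \
	  | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep'

DistinctCurrentSteps and IncreasingCurrentSteps then only need to assert that the printed values are unique and monotonically increasing.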

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.19s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-283371 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-283371 --output=json --user=testUser: (6.193703025s)
--- PASS: TestJSONOutput/stop/Command (6.19s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-245723 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-245723 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (80.089867ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3f227274-4533-4b44-a093-04d9d3c4984e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-245723] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d8f1fb3-ac36-4018-9cb8-d2d17d529ce4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22061"}}
	{"specversion":"1.0","id":"1ef1de09-c6d5-4a5f-9e5d-aaf519e6519a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d076eb51-11f8-41a9-8787-570155e4019b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig"}}
	{"specversion":"1.0","id":"ea941c53-d386-4d43-8895-7bca610606b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube"}}
	{"specversion":"1.0","id":"16f38da8-9bb8-4afc-98fc-c74b71dd6707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0578dc3d-185e-43b1-859f-7f948bc45230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e2bdba14-e959-40c5-a7e1-5372a7082a7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-245723" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-245723
--- PASS: TestErrorJSONOutput (0.23s)
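A minimal sketch of pulling the error record out of the same event stream (assuming jq is available; the profile above is recreated by the start call):

	out/minikube-linux-amd64 start -p json-output-error-245723 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'

Per the captured output above, this would print: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56).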

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (30.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-620363 --network=
E1210 22:46:53.960186    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-620363 --network=: (28.060274725s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-620363" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-620363
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-620363: (2.137882824s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.22s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (21.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-732058 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-732058 --network=bridge: (19.786603634s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-732058" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-732058
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-732058: (2.010715385s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.82s)

                                                
                                    
x
+
TestKicExistingNetwork (25.34s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1210 22:47:43.132078    8660 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 22:47:43.149401    8660 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 22:47:43.149466    8660 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 22:47:43.149484    8660 cli_runner.go:164] Run: docker network inspect existing-network
W1210 22:47:43.167772    8660 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 22:47:43.167804    8660 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1210 22:47:43.167829    8660 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1210 22:47:43.167983    8660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 22:47:43.185764    8660 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9b209413b2be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:8b:00:34:a3:7b} reservation:<nil>}
I1210 22:47:43.186128    8660 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ad51f0}
I1210 22:47:43.186174    8660 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 22:47:43.186216    8660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 22:47:43.232831    8660 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-852050 --network=existing-network
E1210 22:47:54.243812    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-852050 --network=existing-network: (23.134702269s)
helpers_test.go:176: Cleaning up "existing-network-852050" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-852050
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-852050: (2.067308048s)
I1210 22:48:08.453230    8660 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.34s)
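The sequence above reduces to pre-creating a bridge network and then pointing minikube at it; a minimal sketch of the same flow, using the subnet chosen by this run:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	out/minikube-linux-amd64 start -p existing-network-852050 --network=existing-network
	docker network ls --format {{.Name}}

minikube reuses the pre-existing network instead of creating a profile-named one, which is what the final network ls is checking for.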

                                                
                                    
x
+
TestKicCustomSubnet (26.29s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-620156 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-620156 --subnet=192.168.60.0/24: (24.13620688s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-620156 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-620156" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-620156
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-620156: (2.136416986s)
--- PASS: TestKicCustomSubnet (26.29s)
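For reference, the subnet assertion in this test reduces to one inspect call; a minimal sketch (assuming the profile has not yet been deleted):

	out/minikube-linux-amd64 start -p custom-subnet-620156 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-620156 --format "{{(index .IPAM.Config 0).Subnet}}"

The inspect command is expected to print 192.168.60.0/24, matching the --subnet flag passed to start.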

                                                
                                    
x
+
TestKicStaticIP (28.16s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-124855 --static-ip=192.168.200.200
E1210 22:48:59.232248    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-124855 --static-ip=192.168.200.200: (25.895430042s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-124855 ip
helpers_test.go:176: Cleaning up "static-ip-124855" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-124855
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-124855: (2.119649327s)
--- PASS: TestKicStaticIP (28.16s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (49.73s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-352756 --driver=docker  --container-runtime=crio
E1210 22:49:17.314785    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-352756 --driver=docker  --container-runtime=crio: (21.556424316s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-355539 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-355539 --driver=docker  --container-runtime=crio: (22.279392198s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-352756
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-355539
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-355539" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-355539
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-355539: (2.35183502s)
helpers_test.go:176: Cleaning up "first-352756" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-352756
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-352756: (2.326288386s)
--- PASS: TestMinikubeProfile (49.73s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-019064 --memory=3072 --mount-string /tmp/TestMountStartserial659641149/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-019064 --memory=3072 --mount-string /tmp/TestMountStartserial659641149/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.685471221s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-019064 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-032862 --memory=3072 --mount-string /tmp/TestMountStartserial659641149/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-032862 --memory=3072 --mount-string /tmp/TestMountStartserial659641149/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.659540534s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-019064 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-019064 --alsologtostderr -v=5: (1.685061969s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-032862
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-032862: (1.249119494s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.21s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-032862
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-032862: (6.205347562s)
--- PASS: TestMountStart/serial/RestartStopped (7.21s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032862 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (64.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-111304 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-111304 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.95309365s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- rollout status deployment/busybox
E1210 22:51:26.255383    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-111304 -- rollout status deployment/busybox: (2.081433462s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-mmjhn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-xfjhl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-mmjhn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-xfjhl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-mmjhn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-xfjhl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.44s)
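The per-pod checks above follow a simple pattern: wait for the rollout, list the pod names, then run nslookup from each pod. A minimal hand-run sketch against the same deployment (pod names differ between runs; <busybox-pod> stands for one of the names returned by the get pods call):

	kubectl --context multinode-111304 rollout status deployment/busybox
	kubectl --context multinode-111304 get pods -o jsonpath='{.items[*].metadata.name}'
	kubectl --context multinode-111304 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local

The test wraps these in out/minikube-linux-amd64 kubectl -p multinode-111304 --, which resolves to the same context.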

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-mmjhn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-mmjhn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-xfjhl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-111304 -- exec busybox-7b57f96db7-xfjhl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-111304 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-111304 -v=5 --alsologtostderr: (52.613442533s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.26s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-111304 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp testdata/cp-test.txt multinode-111304:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2546623722/001/cp-test_multinode-111304.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304:/home/docker/cp-test.txt multinode-111304-m02:/home/docker/cp-test_multinode-111304_multinode-111304-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test_multinode-111304_multinode-111304-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304:/home/docker/cp-test.txt multinode-111304-m03:/home/docker/cp-test_multinode-111304_multinode-111304-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m03 "sudo cat /home/docker/cp-test_multinode-111304_multinode-111304-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp testdata/cp-test.txt multinode-111304-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2546623722/001/cp-test_multinode-111304-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304-m02:/home/docker/cp-test.txt multinode-111304:/home/docker/cp-test_multinode-111304-m02_multinode-111304.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304 "sudo cat /home/docker/cp-test_multinode-111304-m02_multinode-111304.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304-m02:/home/docker/cp-test.txt multinode-111304-m03:/home/docker/cp-test_multinode-111304-m02_multinode-111304-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m03 "sudo cat /home/docker/cp-test_multinode-111304-m02_multinode-111304-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp testdata/cp-test.txt multinode-111304-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2546623722/001/cp-test_multinode-111304-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304-m03:/home/docker/cp-test.txt multinode-111304:/home/docker/cp-test_multinode-111304-m03_multinode-111304.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304 "sudo cat /home/docker/cp-test_multinode-111304-m03_multinode-111304.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 cp multinode-111304-m03:/home/docker/cp-test.txt multinode-111304-m02:/home/docker/cp-test_multinode-111304-m03_multinode-111304-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test_multinode-111304-m03_multinode-111304-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.70s)
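Each hop above is the same two-step round trip: cp a file into (or between) nodes, then ssh into the target node and cat it back. A minimal sketch of one hop (assuming the multinode-111304 profile is still running):

	out/minikube-linux-amd64 -p multinode-111304 cp testdata/cp-test.txt multinode-111304-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-111304 ssh -n multinode-111304-m02 "sudo cat /home/docker/cp-test.txt"

The ssh -n flag selects which node of the profile the command runs on.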

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-111304 node stop m03: (1.260233545s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-111304 status: exit status 7 (489.553573ms)

                                                
                                                
-- stdout --
	multinode-111304
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-111304-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-111304-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr: exit status 7 (493.065761ms)

                                                
                                                
-- stdout --
	multinode-111304
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-111304-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-111304-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:52:35.178061  163056 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:52:35.178320  163056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:52:35.178331  163056 out.go:374] Setting ErrFile to fd 2...
	I1210 22:52:35.178337  163056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:52:35.178569  163056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:52:35.178770  163056 out.go:368] Setting JSON to false
	I1210 22:52:35.178801  163056 mustload.go:66] Loading cluster: multinode-111304
	I1210 22:52:35.178921  163056 notify.go:221] Checking for updates...
	I1210 22:52:35.179213  163056 config.go:182] Loaded profile config "multinode-111304": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:52:35.179230  163056 status.go:174] checking status of multinode-111304 ...
	I1210 22:52:35.179677  163056 cli_runner.go:164] Run: docker container inspect multinode-111304 --format={{.State.Status}}
	I1210 22:52:35.200370  163056 status.go:371] multinode-111304 host status = "Running" (err=<nil>)
	I1210 22:52:35.200434  163056 host.go:66] Checking if "multinode-111304" exists ...
	I1210 22:52:35.200830  163056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-111304
	I1210 22:52:35.218312  163056 host.go:66] Checking if "multinode-111304" exists ...
	I1210 22:52:35.218581  163056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:52:35.218658  163056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-111304
	I1210 22:52:35.236600  163056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/multinode-111304/id_rsa Username:docker}
	I1210 22:52:35.329900  163056 ssh_runner.go:195] Run: systemctl --version
	I1210 22:52:35.336380  163056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:52:35.348533  163056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 22:52:35.405028  163056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-10 22:52:35.394488821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 22:52:35.405824  163056 kubeconfig.go:125] found "multinode-111304" server: "https://192.168.67.2:8443"
	I1210 22:52:35.405852  163056 api_server.go:166] Checking apiserver status ...
	I1210 22:52:35.405885  163056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:52:35.417523  163056 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	W1210 22:52:35.425758  163056 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 22:52:35.425824  163056 ssh_runner.go:195] Run: ls
	I1210 22:52:35.429395  163056 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 22:52:35.435769  163056 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 22:52:35.435790  163056 status.go:463] multinode-111304 apiserver status = Running (err=<nil>)
	I1210 22:52:35.435799  163056 status.go:176] multinode-111304 status: &{Name:multinode-111304 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:52:35.435814  163056 status.go:174] checking status of multinode-111304-m02 ...
	I1210 22:52:35.436045  163056 cli_runner.go:164] Run: docker container inspect multinode-111304-m02 --format={{.State.Status}}
	I1210 22:52:35.453356  163056 status.go:371] multinode-111304-m02 host status = "Running" (err=<nil>)
	I1210 22:52:35.453379  163056 host.go:66] Checking if "multinode-111304-m02" exists ...
	I1210 22:52:35.453626  163056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-111304-m02
	I1210 22:52:35.470715  163056 host.go:66] Checking if "multinode-111304-m02" exists ...
	I1210 22:52:35.471030  163056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:52:35.471074  163056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-111304-m02
	I1210 22:52:35.488707  163056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/22061-5100/.minikube/machines/multinode-111304-m02/id_rsa Username:docker}
	I1210 22:52:35.582069  163056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:52:35.594617  163056 status.go:176] multinode-111304-m02 status: &{Name:multinode-111304-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:52:35.594664  163056 status.go:174] checking status of multinode-111304-m03 ...
	I1210 22:52:35.594911  163056 cli_runner.go:164] Run: docker container inspect multinode-111304-m03 --format={{.State.Status}}
	I1210 22:52:35.612405  163056 status.go:371] multinode-111304-m03 host status = "Stopped" (err=<nil>)
	I1210 22:52:35.612425  163056 status.go:384] host is not running, skipping remaining checks
	I1210 22:52:35.612431  163056 status.go:176] multinode-111304-m03 status: &{Name:multinode-111304-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-111304 node start m03 -v=5 --alsologtostderr: (6.456064322s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (83.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-111304
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-111304
E1210 22:52:54.244983    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-111304: (31.385574933s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-111304 --wait=true -v=5 --alsologtostderr
E1210 22:53:59.229619    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-111304 --wait=true -v=5 --alsologtostderr: (52.146006831s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-111304
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.65s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-111304 node delete m03: (4.639859776s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.23s)
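The readiness check above relies on a kubectl go-template; in plain form (without the extra quoting the test harness adds) it prints the status of each node's Ready condition, one value per line:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected output after the delete: one " True" per remaining node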

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (28.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-111304 stop: (28.381123288s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-111304 status: exit status 7 (98.241204ms)

                                                
                                                
-- stdout --
	multinode-111304
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-111304-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr: exit status 7 (95.503688ms)

                                                
                                                
-- stdout --
	multinode-111304
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-111304-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:54:40.183462  172868 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:54:40.183733  172868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:54:40.183744  172868 out.go:374] Setting ErrFile to fd 2...
	I1210 22:54:40.183750  172868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:54:40.183951  172868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:54:40.184119  172868 out.go:368] Setting JSON to false
	I1210 22:54:40.184152  172868 mustload.go:66] Loading cluster: multinode-111304
	I1210 22:54:40.184256  172868 notify.go:221] Checking for updates...
	I1210 22:54:40.184581  172868 config.go:182] Loaded profile config "multinode-111304": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:54:40.184597  172868 status.go:174] checking status of multinode-111304 ...
	I1210 22:54:40.185042  172868 cli_runner.go:164] Run: docker container inspect multinode-111304 --format={{.State.Status}}
	I1210 22:54:40.203182  172868 status.go:371] multinode-111304 host status = "Stopped" (err=<nil>)
	I1210 22:54:40.203219  172868 status.go:384] host is not running, skipping remaining checks
	I1210 22:54:40.203233  172868 status.go:176] multinode-111304 status: &{Name:multinode-111304 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:54:40.203274  172868 status.go:174] checking status of multinode-111304-m02 ...
	I1210 22:54:40.203552  172868 cli_runner.go:164] Run: docker container inspect multinode-111304-m02 --format={{.State.Status}}
	I1210 22:54:40.221336  172868 status.go:371] multinode-111304-m02 host status = "Stopped" (err=<nil>)
	I1210 22:54:40.221353  172868 status.go:384] host is not running, skipping remaining checks
	I1210 22:54:40.221362  172868 status.go:176] multinode-111304-m02 status: &{Name:multinode-111304-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.58s)
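Note that `minikube status` signals a fully stopped cluster through a non-zero exit code (7 in both runs above) while still printing the per-node breakdown, so scripted callers should treat that code as expected rather than as a failure; a minimal sketch (profile name illustrative):

	minikube -p demo status
	rc=$?
	# per the status checks above, exit status 7 accompanies "host: Stopped"
	[ "$rc" -eq 7 ] && echo "cluster is stopped"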

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (47.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-111304 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 22:55:22.292571    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-111304 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.323164765s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-111304 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.92s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (21.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-111304
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-111304-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-111304-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.120837ms)

                                                
                                                
-- stdout --
	* [multinode-111304-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-111304-m02' is duplicated with machine name 'multinode-111304-m02' in profile 'multinode-111304'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-111304-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-111304-m03 --driver=docker  --container-runtime=crio: (18.979459737s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-111304
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-111304: exit status 80 (287.778695ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-111304 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-111304-m03 already exists in multinode-111304-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-111304-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-111304-m03: (2.341192943s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.75s)

                                                
                                    
x
+
TestPreload (98.88s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-293031 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1210 22:56:26.255449    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-293031 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (45.406411731s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-293031 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-293031 image pull gcr.io/k8s-minikube/busybox: (1.468022228s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-293031
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-293031: (6.189680429s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-293031 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-293031 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.196743882s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-293031 image list
helpers_test.go:176: Cleaning up "test-preload-293031" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-293031
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-293031: (2.390281457s)
--- PASS: TestPreload (98.88s)
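The sequence above verifies that an image pulled into a cluster started with --preload=false is still present after a stop and a restart with --preload=true; condensed, the same flow looks like this (profile name illustrative):

	minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --preload=true
	minikube -p preload-demo image list     # busybox should still be listed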

                                                
                                    
x
+
TestScheduledStopUnix (95.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-230539 --memory=3072 --driver=docker  --container-runtime=crio
E1210 22:57:49.322007    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-230539 --memory=3072 --driver=docker  --container-runtime=crio: (19.215611006s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-230539 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 22:57:52.228702  189962 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:57:52.228792  189962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:57:52.228800  189962 out.go:374] Setting ErrFile to fd 2...
	I1210 22:57:52.228804  189962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:57:52.228970  189962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:57:52.229211  189962 out.go:368] Setting JSON to false
	I1210 22:57:52.229298  189962 mustload.go:66] Loading cluster: scheduled-stop-230539
	I1210 22:57:52.229588  189962 config.go:182] Loaded profile config "scheduled-stop-230539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:57:52.229670  189962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/config.json ...
	I1210 22:57:52.229846  189962 mustload.go:66] Loading cluster: scheduled-stop-230539
	I1210 22:57:52.229940  189962 config.go:182] Loaded profile config "scheduled-stop-230539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-230539 -n scheduled-stop-230539
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 22:57:52.610431  190127 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:57:52.610716  190127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:57:52.610726  190127 out.go:374] Setting ErrFile to fd 2...
	I1210 22:57:52.610730  190127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:57:52.610904  190127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:57:52.611126  190127 out.go:368] Setting JSON to false
	I1210 22:57:52.611311  190127 daemonize_unix.go:73] killing process 189998 as it is an old scheduled stop
	I1210 22:57:52.611428  190127 mustload.go:66] Loading cluster: scheduled-stop-230539
	I1210 22:57:52.611909  190127 config.go:182] Loaded profile config "scheduled-stop-230539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:57:52.612004  190127 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/config.json ...
	I1210 22:57:52.612216  190127 mustload.go:66] Loading cluster: scheduled-stop-230539
	I1210 22:57:52.612341  190127 config.go:182] Loaded profile config "scheduled-stop-230539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 22:57:52.616818    8660 retry.go:31] will retry after 124.586µs: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.617978    8660 retry.go:31] will retry after 75.523µs: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.619103    8660 retry.go:31] will retry after 273.79µs: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.620235    8660 retry.go:31] will retry after 342.317µs: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.621348    8660 retry.go:31] will retry after 560.781µs: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.622469    8660 retry.go:31] will retry after 520.103µs: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.623590    8660 retry.go:31] will retry after 1.266258ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.625742    8660 retry.go:31] will retry after 1.486528ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.627931    8660 retry.go:31] will retry after 2.684438ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.631108    8660 retry.go:31] will retry after 5.683379ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.637309    8660 retry.go:31] will retry after 5.419632ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.643518    8660 retry.go:31] will retry after 8.677913ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.652697    8660 retry.go:31] will retry after 17.25162ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.670924    8660 retry.go:31] will retry after 18.792133ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.690158    8660 retry.go:31] will retry after 30.190952ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
I1210 22:57:52.721468    8660 retry.go:31] will retry after 54.50642ms: open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-230539 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1210 22:57:54.240896    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/addons-713277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-230539 -n scheduled-stop-230539
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-230539
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-230539 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 22:58:18.535734  190771 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:58:18.535845  190771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:58:18.535853  190771 out.go:374] Setting ErrFile to fd 2...
	I1210 22:58:18.535857  190771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:58:18.536062  190771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 22:58:18.536316  190771 out.go:368] Setting JSON to false
	I1210 22:58:18.536392  190771 mustload.go:66] Loading cluster: scheduled-stop-230539
	I1210 22:58:18.536722  190771 config.go:182] Loaded profile config "scheduled-stop-230539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:58:18.536784  190771 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/scheduled-stop-230539/config.json ...
	I1210 22:58:18.536968  190771 mustload.go:66] Loading cluster: scheduled-stop-230539
	I1210 22:58:18.537064  190771 config.go:182] Loaded profile config "scheduled-stop-230539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1210 22:58:59.231658    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-230539
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-230539: exit status 7 (81.194362ms)

                                                
                                                
-- stdout --
	scheduled-stop-230539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-230539 -n scheduled-stop-230539
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-230539 -n scheduled-stop-230539: exit status 7 (79.537093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-230539" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-230539
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-230539: (4.421700349s)
--- PASS: TestScheduledStopUnix (95.17s)
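For reference, the scheduled-stop flags exercised above compose as follows; a minimal sketch (profile name and delays illustrative):

	minikube stop -p demo --schedule 5m       # arm a stop five minutes out and return immediately
	minikube status -p demo --format '{{.TimeToStop}}'
	minikube stop -p demo --cancel-scheduled  # cancel all pending scheduled stops
	minikube stop -p demo --schedule 15s      # re-arm; status soon reports "host: Stopped" (exit status 7)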

                                                
                                    
x
+
TestInsufficientStorage (11.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-351646 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-351646 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.325877281s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e603ace1-cd41-42ed-9e38-c4defb0267f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-351646] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3593d219-6d89-4b62-bc28-76c272ee5f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22061"}}
	{"specversion":"1.0","id":"6487b752-d1f4-4d95-ab66-2d322d866620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c8a63cd-1982-43bb-9556-295ee2400ea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig"}}
	{"specversion":"1.0","id":"645a801b-626a-46db-8f3c-ba7fb8349e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube"}}
	{"specversion":"1.0","id":"1c3e66e5-ec82-4f1e-b60a-f9c17f48c175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ee282e4e-644d-4623-8a2e-4b4c7bad9c06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dd75253c-d4cb-4e85-b950-40f93dd1bc60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"49af84d9-1bee-4bb4-9210-c649d719ae7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7ac3f4e1-8a41-43e2-8aff-b9117847b9f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba428330-e463-498c-9f14-10fe5cd6a96c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b845053a-3c99-4695-b5b5-62ce4e17e2ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-351646\" primary control-plane node in \"insufficient-storage-351646\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8e6881b-1c64-42fa-9578-6b5c9aa04c44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"18175c22-88a7-40f6-a363-5090c23c8850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"08aeaae3-d450-400e-9120-2a3ac9befadb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-351646 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-351646 --output=json --layout=cluster: exit status 7 (289.935164ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-351646","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-351646","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 22:59:17.730578  193291 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-351646" does not appear in /home/jenkins/minikube-integration/22061-5100/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-351646 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-351646 --output=json --layout=cluster: exit status 7 (286.359694ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-351646","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-351646","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 22:59:18.017122  193415 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-351646" does not appear in /home/jenkins/minikube-integration/22061-5100/kubeconfig
	E1210 22:59:18.027520  193415 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/insufficient-storage-351646/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-351646" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-351646
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-351646: (1.89698151s)
--- PASS: TestInsufficientStorage (11.80s)
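With --output=json, each progress step and error is emitted as one CloudEvents-style JSON object per line, so the failure surfaced above can be extracted with a line-oriented JSON filter; a sketch assuming jq is available (jq is not part of this run):

	minikube start -p demo --memory=3072 --output=json --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# e.g. "Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check."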

                                                
                                    
x
+
TestRunningBinaryUpgrade (44.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3758387044 start -p running-upgrade-733685 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3758387044 start -p running-upgrade-733685 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.296873128s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-733685 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-733685 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.806101876s)
helpers_test.go:176: Cleaning up "running-upgrade-733685" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-733685
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-733685: (2.405264008s)
--- PASS: TestRunningBinaryUpgrade (44.19s)

                                                
                                    
x
+
TestKubernetesUpgrade (299.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.317528648s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-000011
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-000011: (1.306778302s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-000011 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-000011 status --format={{.Host}}: exit status 7 (83.909234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m21.228763112s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-000011 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (82.988921ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-000011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-000011
	    minikube start -p kubernetes-upgrade-000011 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0000112 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-000011 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-000011 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.537800812s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-000011" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-000011
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-000011: (2.839311909s)
--- PASS: TestKubernetesUpgrade (299.46s)
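The upgrade path above is: start on an older Kubernetes release, stop, restart with a newer --kubernetes-version, then confirm that asking for the old version again is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). Condensed, with an illustrative profile name:

	minikube start -p upgrade-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.35.0-beta.0   # in-place upgrade
	minikube start -p upgrade-demo --kubernetes-version=v1.28.0          # rejected as a downgrade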

                                                
                                    
x
+
TestMissingContainerUpgrade (93.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2131522122 start -p missing-upgrade-628477 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2131522122 start -p missing-upgrade-628477 --memory=3072 --driver=docker  --container-runtime=crio: (44.363998178s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-628477
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-628477: (10.411477547s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-628477
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-628477 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-628477 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.773322289s)
helpers_test.go:176: Cleaning up "missing-upgrade-628477" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-628477
E1210 23:01:26.255757    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-628477: (2.379482849s)
--- PASS: TestMissingContainerUpgrade (93.71s)

                                                
                                    
x
+
TestPause/serial/Start (49.39s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-615194 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-615194 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.38706857s)
--- PASS: TestPause/serial/Start (49.39s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (11.04s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-615194 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-615194 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (11.026974329s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (11.04s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (284.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1449339124 start -p stopped-upgrade-679204 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1449339124 start -p stopped-upgrade-679204 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.99207747s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1449339124 -p stopped-upgrade-679204 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1449339124 -p stopped-upgrade-679204 stop: (4.245755847s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-679204 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-679204 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.224987416s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (284.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508535 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-508535 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (88.565199ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-508535] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
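As the error above indicates, --no-kubernetes cannot be combined with an explicit --kubernetes-version; the suggested workaround is to clear the configured version first. A minimal sketch (profile name illustrative):

	minikube config unset kubernetes-version
	minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio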

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (24.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508535 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508535 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.687158454s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-508535 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-177285 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-177285 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (206.708904ms)

                                                
                                                
-- stdout --
	* [false-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:03:02.349313  241585 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:03:02.349629  241585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:03:02.349656  241585 out.go:374] Setting ErrFile to fd 2...
	I1210 23:03:02.349664  241585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:03:02.349983  241585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5100/.minikube/bin
	I1210 23:03:02.350700  241585 out.go:368] Setting JSON to false
	I1210 23:03:02.352203  241585 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2724,"bootTime":1765405058,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:03:02.352288  241585 start.go:143] virtualization: kvm guest
	I1210 23:03:02.354300  241585 out.go:179] * [false-177285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:03:02.356138  241585 notify.go:221] Checking for updates...
	I1210 23:03:02.356513  241585 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:03:02.358202  241585 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:03:02.359902  241585 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5100/kubeconfig
	I1210 23:03:02.361437  241585 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5100/.minikube
	I1210 23:03:02.363622  241585 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:03:02.365094  241585 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:03:02.367353  241585 config.go:182] Loaded profile config "NoKubernetes-508535": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:03:02.367550  241585 config.go:182] Loaded profile config "kubernetes-upgrade-000011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 23:03:02.367713  241585 config.go:182] Loaded profile config "stopped-upgrade-679204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:03:02.367849  241585 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:03:02.399073  241585 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 23:03:02.399170  241585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:03:02.473173  241585 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 23:03:02.460852949 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652064256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 23:03:02.473281  241585 docker.go:319] overlay module found
	I1210 23:03:02.475157  241585 out.go:179] * Using the docker driver based on user configuration
	I1210 23:03:02.477174  241585 start.go:309] selected driver: docker
	I1210 23:03:02.477197  241585 start.go:927] validating driver "docker" against <nil>
	I1210 23:03:02.477212  241585 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:03:02.479346  241585 out.go:203] 
	W1210 23:03:02.481892  241585 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 23:03:02.483207  241585 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-177285 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-177285" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:03:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-508535
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:00:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-000011
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:01:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-679204
contexts:
- context:
    cluster: NoKubernetes-508535
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:03:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-508535
  name: NoKubernetes-508535
- context:
    cluster: kubernetes-upgrade-000011
    user: kubernetes-upgrade-000011
  name: kubernetes-upgrade-000011
- context:
    cluster: stopped-upgrade-679204
    user: stopped-upgrade-679204
  name: stopped-upgrade-679204
current-context: NoKubernetes-508535
kind: Config
users:
- name: NoKubernetes-508535
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/NoKubernetes-508535/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/NoKubernetes-508535/client.key
- name: kubernetes-upgrade-000011
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/client.key
- name: stopped-upgrade-679204
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/stopped-upgrade-679204/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/stopped-upgrade-679204/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-177285

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-177285"

                                                
                                                
----------------------- debugLogs end: false-177285 [took: 3.342573188s] --------------------------------
helpers_test.go:176: Cleaning up "false-177285" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-177285
--- PASS: TestNetworkPlugins/group/false (3.73s)
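
Note: every command in the debugLogs block above fails the same way because the false-177285 profile was never started (the start attempt is rejected with MK_USAGE, since the "false" CNI option is incompatible with crio), so no matching entry exists in the kubeconfig dumped under ">>> k8s: kubectl config:". A minimal sketch of that pre-check using client-go's clientcmd loader follows; the context name is taken from the log, while the loader call and everything else is illustrative rather than code from the test suite.

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Resolve the same kubeconfig kubectl would use (KUBECONFIG or ~/.kube/config).
        path := clientcmd.NewDefaultClientConfigLoadingRules().GetDefaultFilename()
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
            os.Exit(1)
        }
        const want = "false-177285" // profile name from the log above
        if _, ok := cfg.Contexts[want]; !ok {
            // This is the condition behind every "context was not found" line above.
            fmt.Printf("context %q not found; %d contexts present\n", want, len(cfg.Contexts))
            return
        }
        fmt.Printf("context %q exists\n", want)
    }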

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (23.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508535 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508535 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.766820733s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-508535 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-508535 status -o json: exit status 2 (327.094964ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-508535","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-508535
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-508535: (2.027014228s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.12s)
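
Note: the non-zero exit from "status -o json" above is expected here; the profile is running with Kubernetes intentionally stopped, and the per-component state is still printed as JSON on stdout. A rough sketch of reading that JSON, with the struct fields copied from the output above (the struct itself is illustrative, not the one used by the test helpers):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileStatus mirrors the JSON shown above.
    type profileStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
    }

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-508535", "status", "-o", "json")
        out, err := cmd.Output()
        if err != nil {
            // Exit status 2 in the log above is expected when components are
            // stopped; stdout still carries the JSON document.
            fmt.Println("status exited non-zero:", err)
        }
        var st profileStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    }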

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508535 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508535 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.240733142s)
--- PASS: TestNoKubernetes/serial/Start (7.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22061-5100/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-508535 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-508535 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.311816ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
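
Note: this check passes because "systemctl is-active --quiet" reports the unit state purely through its exit code (typically 3 for an inactive unit, which the SSH wrapper surfaces as "Process exited with status 3" and minikube as exit status 1). A small sketch of the same probe run through "minikube ssh" instead of the test's own SSH helper (the wrapping is an assumption; the systemctl invocation is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // is-active --quiet prints nothing; a non-zero exit means kubelet is not running.
        cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-508535",
            "sudo systemctl is-active --quiet service kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active (expected for --no-kubernetes):", err)
            return
        }
        fmt.Println("kubelet is active")
    }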

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.578441011s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (49.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.694035155s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (3.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-508535
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-508535: (3.350460705s)
--- PASS: TestNoKubernetes/serial/Stop (3.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508535 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508535 --driver=docker  --container-runtime=crio: (6.692093653s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-508535 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-508535 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.790355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (46.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1210 23:03:59.229534    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.762163899s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-280530 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [eef6ab3b-83eb-4097-a924-8a1b73986571] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [eef6ab3b-83eb-4097-a924-8a1b73986571] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003805115s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-280530 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.32s)
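
Note: DeployApp applies testdata/busybox.yaml and then waits for pods labelled integration-test=busybox to report healthy (about 7s here, against the 8m0s ceiling logged above). The test uses its own pod-watching helpers; an equivalent wait expressed with kubectl, wrapped in Go to match the other sketches, would look roughly like this (context name, label and timeout taken from this run):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Block until every pod carrying the busybox label in the default
        // namespace is Ready, or give up after the same 8-minute ceiling.
        cmd := exec.Command("kubectl", "--context", "old-k8s-version-280530",
            "wait", "--for=condition=Ready", "pod",
            "-l", "integration-test=busybox", "-n", "default", "--timeout=8m")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("busybox pod did not become Ready:", err)
            os.Exit(1)
        }
    }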

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-280530 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-280530 --alsologtostderr -v=3: (15.951214609s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-092439 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dd3bcee3-92a1-4c68-8569-badd5445456f] Pending
helpers_test.go:353: "busybox" [dd3bcee3-92a1-4c68-8569-badd5445456f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dd3bcee3-92a1-4c68-8569-badd5445456f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003960086s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-092439 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-092439 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-092439 --alsologtostderr -v=3: (16.35100983s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530: exit status 7 (80.070029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-280530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
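
Note: the "(may be ok)" marker appears because "minikube status" also encodes component state in its exit code. Per the command's help text (treat the exact bit meanings as an assumption of this note, not something verified in this run), the low bits flag the host, cluster and Kubernetes as not-OK, so exit status 7 immediately after a stop is the expected value rather than a failure:

    package main

    import "fmt"

    func main() {
        // Assumed bit layout from `minikube status --help`:
        //   1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK.
        // Exit status 7 (1+2+4) then just means "everything is stopped".
        const code = 7
        fmt.Println("host stopped:      ", code&1 != 0)
        fmt.Println("cluster stopped:   ", code&2 != 0)
        fmt.Println("kubernetes stopped:", code&4 != 0)
    }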

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (50.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-280530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.744641738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-280530 -n old-k8s-version-280530
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439: exit status 7 (106.745322ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-092439 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (43.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-092439 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (43.488176459s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-092439 -n no-preload-092439
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.606731094s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-679204
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-679204: (1.387086177s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (42.719261413s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-2ggd7" [2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004160236s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-2ggd7" [2dbe8a53-4cd0-40cc-b7ae-14f04b88b87e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003662621s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-280530 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-6jlnk" [1ae48ac0-0434-4fba-b672-40e13b9c8c63] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003634416s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-280530 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-6jlnk" [1ae48ac0-0434-4fba-b672-40e13b9c8c63] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003243191s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-092439 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-092439 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-468067 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3e157d1d-e99f-4f73-a95d-a881d3d14cc4] Pending
helpers_test.go:353: "busybox" [3e157d1d-e99f-4f73-a95d-a881d3d14cc4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3e157d1d-e99f-4f73-a95d-a881d3d14cc4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004178536s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-468067 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (26.14231618s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c0dc1efe-3497-4123-8574-5fff0265cf3e] Pending
helpers_test.go:353: "busybox" [c0dc1efe-3497-4123-8574-5fff0265cf3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c0dc1efe-3497-4123-8574-5fff0265cf3e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004736482s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (41.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.105051707s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-468067 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-468067 --alsologtostderr -v=3: (16.483466582s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.48s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-443884 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-443884 --alsologtostderr -v=3: (18.226640893s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067: exit status 7 (102.297229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-468067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (51.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1210 23:06:26.255865    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-174200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-468067 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (50.747649329s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468067 -n embed-certs-468067
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-852445 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-852445 --alsologtostderr -v=3: (2.437299267s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445: exit status 7 (90.552817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-852445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-852445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (11.558765371s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-852445 -n newest-cni-852445
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884: exit status 7 (125.185278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-443884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-443884 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (51.746993047s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-443884 -n default-k8s-diff-port-443884
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.08s)
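
Note: the profile is restarted with --apiserver-port=8444 instead of the default 8443. As a rough manual check, assuming the default-k8s-diff-port-443884 context is still in the kubeconfig, the advertised control-plane URL should show that port:

	kubectl --context default-k8s-diff-port-443884 cluster-info
	# the control plane URL printed here is expected to end in :8444 for this profile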

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-852445 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-177285 "pgrep -a kubelet"
I1210 23:06:49.763090    8660 config.go:182] Loaded profile config "auto-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-177285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cw5nx" [dbec518e-cd17-4159-ba38-a1c1d5fc90e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cw5nx" [dbec518e-cd17-4159-ba38-a1c1d5fc90e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004288868s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)
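
Note: the harness waits up to 15m for the app=netcat pod created from testdata/netcat-deployment.yaml to become Ready. A minimal manual equivalent, sketched under the assumption that the auto-177285 context still exists:

	kubectl --context auto-177285 get pods -l app=netcat
	kubectl --context auto-177285 wait --for=condition=Ready pod -l app=netcat --timeout=120s
	# --timeout here is an arbitrary illustrative value, not the 15m used by the test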

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (37.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (37.742192364s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (37.74s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
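
Note: DNS, Localhost and HairPin all exec into the same netcat deployment; the three probes below are the commands from the log above, simply replayed in one loop for convenience:

	for probe in "nslookup kubernetes.default" \
	             "nc -w 5 -i 5 -z localhost 8080" \
	             "nc -w 5 -i 5 -z netcat 8080"; do
	  kubectl --context auto-177285 exec deployment/netcat -- /bin/sh -c "$probe"
	done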

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4l5m7" [ceb10413-18a8-45d1-9707-8a032353a846] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003598181s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (50.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.636904941s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-4l5m7" [ceb10413-18a8-45d1-9707-8a032353a846] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004040917s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-468067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ptwlg" [9d8b1623-7dda-402d-9e59-9e72100ca713] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00345578s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-468067 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ptwlg" [9d8b1623-7dda-402d-9e59-9e72100ca713] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003387107s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-443884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-kb67v" [9b500ae7-b836-408b-a181-9b7813d2720e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008231932s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
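
Note: ControllerPod only checks that the kindnet DaemonSet pod (label app=kindnet) in kube-system reports Running. Listing it by the same label is an easy way to eyeball the state by hand, assuming the kindnet-177285 context is still present:

	kubectl --context kindnet-177285 -n kube-system get pods -l app=kindnet -o wide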

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-443884 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.375185595s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.38s)
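
Note: here --cni is given a manifest path (testdata/kube-flannel.yaml) rather than a built-in name, so minikube applies that file as the CNI. Two hedged ways to confirm what was actually deployed, assuming the manifest follows the usual kube-flannel layout (the quoted-command ssh form is the same one the KubeletFlags checks use):

	kubectl --context custom-flannel-177285 get pods -A -l app=flannel
	minikube ssh -p custom-flannel-177285 "ls /etc/cni/net.d"
	# namespace and labels depend on the manifest contents, which are not shown in this log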

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-177285 "pgrep -a kubelet"
I1210 23:07:39.186526    8660 config.go:182] Loaded profile config "kindnet-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-177285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zn4hm" [9ee483d5-7e23-4ce8-9fe0-edb90583f0ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zn4hm" [9ee483d5-7e23-4ce8-9fe0-edb90583f0ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003712327s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (43.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (43.06219681s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-rs7wt" [99fb9f6d-bf69-48eb-b3fc-8d99e0c77c22] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-rs7wt" [99fb9f6d-bf69-48eb-b3fc-8d99e0c77c22] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003611048s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (45.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.783289066s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.78s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-177285 "pgrep -a kubelet"
I1210 23:08:15.845410    8660 config.go:182] Loaded profile config "calico-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-177285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-s9sps" [9d7f1cae-834e-4600-9b63-a751bdfae660] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-s9sps" [9d7f1cae-834e-4600-9b63-a751bdfae660] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004692205s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-177285 "pgrep -a kubelet"
I1210 23:08:29.528352    8660 config.go:182] Loaded profile config "custom-flannel-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-177285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7w7lv" [aa677e7b-916f-4a18-b737-fbb1a2c10c4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7w7lv" [aa677e7b-916f-4a18-b737-fbb1a2c10c4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00413029s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-177285 "pgrep -a kubelet"
I1210 23:08:33.177217    8660 config.go:182] Loaded profile config "enable-default-cni-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-177285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7cvfd" [ffbf943f-2876-40fe-a5b5-20a4bb43ee49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7cvfd" [ffbf943f-2876-40fe-a5b5-20a4bb43ee49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004294534s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (58.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-177285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (58.213744585s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.21s)
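
Note: --cni=bridge writes a plain bridge CNI config on the node rather than deploying a CNI DaemonSet, which is why there is no ControllerPod step for this profile. Inspecting the generated config directly is one way to verify it, again using the quoted-command ssh form seen in the KubeletFlags checks:

	minikube ssh -p bridge-177285 "ls /etc/cni/net.d"
	# exact filenames vary by minikube version; this only shows which CNI config files were written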

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-d7wlm" [8d710994-8652-45cc-a08b-7f0aebbf7f2d] Running
E1210 23:08:59.229630    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/functional-345678/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003550039s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-177285 "pgrep -a kubelet"
I1210 23:09:04.919600    8660 config.go:182] Loaded profile config "flannel-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-177285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cbgz5" [9bfdcd0d-f325-4cd4-a6e0-9f7cb53df592] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cbgz5" [9bfdcd0d-f325-4cd4-a6e0-9f7cb53df592] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.00406621s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-177285 "pgrep -a kubelet"
I1210 23:09:46.029179    8660 config.go:182] Loaded profile config "bridge-177285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-177285 replace --force -f testdata/netcat-deployment.yaml
E1210 23:09:46.046717    8660 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/no-preload-092439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tk77t" [0b6bb902-928f-45d9-aad7-cc705a88179b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tk77t" [0b6bb902-928f-45d9-aad7-cc705a88179b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004227973s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-177285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-177285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
372 TestStartStop/group/disable-driver-mounts 0.24
379 TestNetworkPlugins/group/kubenet 3.68
388 TestNetworkPlugins/group/cilium 3.6
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-614588" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-614588
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-177285 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-177285" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:00:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-000011
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:01:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-679204
contexts:
- context:
    cluster: kubernetes-upgrade-000011
    user: kubernetes-upgrade-000011
  name: kubernetes-upgrade-000011
- context:
    cluster: stopped-upgrade-679204
    user: stopped-upgrade-679204
  name: stopped-upgrade-679204
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-000011
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/client.key
- name: stopped-upgrade-679204
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/stopped-upgrade-679204/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/stopped-upgrade-679204/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-177285

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-177285"

                                                
                                                
----------------------- debugLogs end: kubenet-177285 [took: 3.503323597s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-177285" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-177285
--- SKIP: TestNetworkPlugins/group/kubenet (3.68s)
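
Every kubectl probe in the debugLogs above fails with "context was not found" or "does not exist" because the kubenet-177285 test was skipped before a cluster was ever started, so no such context was written to the kubeconfig; the config dumped above only contains kubernetes-upgrade-000011 and stopped-upgrade-679204. As a rough illustration, and assuming the default ~/.kube/config location rather than the suite's own paths, a small client-go program can confirm which contexts actually exist before running any kubectl commands against them:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the default kubeconfig path; the CI job uses its own
		// KUBECONFIG under the jenkins workspace.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		// List the contexts that do exist (kubernetes-upgrade-000011 and
		// stopped-upgrade-679204 in the dump above) and flag the missing one.
		for name := range cfg.Contexts {
			fmt.Println("have context:", name)
		}
		if _, ok := cfg.Contexts["kubenet-177285"]; !ok {
			fmt.Println("context kubenet-177285 does not exist; skip kubectl probes against it")
		}
	}
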

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-177285 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-177285" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:03:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-508535
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:00:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-000011
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5100/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:01:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-679204
contexts:
- context:
    cluster: NoKubernetes-508535
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:03:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-508535
  name: NoKubernetes-508535
- context:
    cluster: kubernetes-upgrade-000011
    user: kubernetes-upgrade-000011
  name: kubernetes-upgrade-000011
- context:
    cluster: stopped-upgrade-679204
    user: stopped-upgrade-679204
  name: stopped-upgrade-679204
current-context: NoKubernetes-508535
kind: Config
users:
- name: NoKubernetes-508535
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/NoKubernetes-508535/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/NoKubernetes-508535/client.key
- name: kubernetes-upgrade-000011
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/kubernetes-upgrade-000011/client.key
- name: stopped-upgrade-679204
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/stopped-upgrade-679204/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5100/.minikube/profiles/stopped-upgrade-679204/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-177285

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-177285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-177285"

                                                
                                                
----------------------- debugLogs end: cilium-177285 [took: 3.441931681s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-177285" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-177285
--- SKIP: TestNetworkPlugins/group/cilium (3.60s)
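
The cleanup logged at helpers_test.go:179 in both of the skipped network-plugin groups simply shells out to the freshly built minikube binary to delete the leftover profile. A stand-alone sketch of that call, reusing the binary path and profile name exactly as they appear in the log, might look like the following; it is an illustration, not the suite's actual helper:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same invocation as the logged cleanup step:
		// out/minikube-linux-amd64 delete -p cilium-177285
		cmd := exec.Command("out/minikube-linux-amd64", "delete", "-p", "cilium-177285")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("delete profile: %v\n%s", err, out)
		}
		log.Printf("profile cleaned up:\n%s", out)
	}
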

                                                
                                    